
Face emotion analysis using image signal

This report is submitted to


Bundelkhand Institute of Engineering and Technology
For the partial fulfillment for the award of degree of
Bachelor of Technology
in
Electronics and Communication Engineering

By
Deepesh Rajpoot (2100430310023)
Ishant Vaidh (2100430310031)
Satyendra Singh Chauhan (2100430310048)
Under the supervision of
Dr. Atul Kumar Dwivedi

Department of Electronics and Communication Engineering


Bundelkhand Institute of Engineering & Technology
Jhansi
(An Autonomous Institute)
Certificate

This is to certify that the work contained in the thesis entitled “Face Emotion
Detection using image signal” submitted by Deepesh Rajpoot, Ishant Vaidh, and
Satyendra Singh Chauhan for the award of the degree of Bachelor of Technology in
Electronics and Communication Engineering to the Bundelkhand Institute of
Engineering and Technology, Jhansi is a record of bona fide research work carried
out by them under my direct supervision and guidance.

I consider that the thesis has reached the required standard and fulfils the
requirements of the rules and regulations relating to the nature of the degree. The
contents embodied in the thesis have not been submitted for the award of any other
degree or diploma in this or any other university.

Dr. Atul Kumar Dwivedi


Assistant Professor
ECE Department
BIET Jhansi
Declaration

We hereby declare that the project work entitled “Face emotion analysis using image
signal” is authentic work carried out by us under the guidance of Dr. Atul Kumar
Dwivedi of the Electronics and Communication Engineering Department at
Bundelkhand Institute of Engineering and Technology, Jhansi. Information derived
from other sources has been quoted in the text, and a list of references has been given.

Deepesh Rajpoot (2100430310023)


Ishant Vaidh (2100430310031)
Satyendra Singh Chauhan (2100430310048)
Acknowledgement

We would like to express our gratitude to Dr. Atul Kumar Dwivedi for his guidance
and constant supervision, as well as for providing the information necessary for the
project and this report. We also extend our sincere thanks to our Director, Head of
Department, and all faculty members. Finally, we are especially grateful to our
parents, whose constant support improved our performance significantly.
ABSTRACT

Emotion detection from facial expressions is an essential task in human-computer
interaction and in various fields such as psychology, market research, and
entertainment.
This project proposes an innovative AI-based approach for accurately recognizing and
classifying facial emotions in real-time. Leveraging the advancements in computer
vision and machine learning techniques, the proposed system utilizes deep neural
networks to analyze facial features and extract meaningful information.

The project involves several key stages. First, a comprehensive dataset comprising
diverse facial expressions is collected and annotated. This dataset is then used to train a
deep convolutional neural network (CNN), which learns to recognize and extract
discriminative features from facial images. The CNN is fine-tuned through transfer
learning to enhance its performance on the emotion detection task.
Face Emotion Detection using image signal

Introduction:
Emotion plays a fundamental role in human communication and interaction. The ability to accurately
recognize and interpret facial expressions is crucial for understanding emotional states, intentions,
and reactions. Consequently, the field of computer vision and artificial intelligence has witnessed
significant advancements in developing systems that can automatically detect and analyze emotions
from facial expressions. This project aims to contribute to this field by proposing an AI-based
approach for face emotion detection, leveraging deep learning techniques and real-time processing.
Facial emotion detection has diverse applications across multiple domains. In psychology, it can
assist in studying emotional responses, personality traits, and mental health conditions. In marketing,
it can provide valuable insights into consumer preferences and reactions to products and
advertisements. In human-computer interaction, it can enable more natural and intuitive interfaces,
enhancing user experiences. Furthermore, in fields like entertainment and gaming, emotion detection
can create immersive and personalized experiences.
Traditional approaches to facial emotion detection relied on manually designed features and rule-
based algorithms, which often struggled to capture the complexity and variability of facial
expressions. However, recent advancements in deep learning, particularly convolutional neural
networks (CNNs), have revolutionized this field. CNNs excel at automatically learning
discriminative features from raw data, making them well-suited for analyzing facial images and
extracting meaningful information.
The proposed project involves training a CNN model using a comprehensive dataset of annotated
facial expressions.
Real-time processing is another critical aspect of the project. To make the emotion detection system
practical and applicable in real-world scenarios, it needs to operate with minimal delay. The
proposed system will leverage the computational efficiency of deep learning models and optimize
the processing pipeline to achieve near real-time performance, allowing for seamless integration into
various applications and devices.
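As a simple illustration of how near real-time performance can be verified, the sketch
below times a single TensorFlow Lite inference; the Interpreter, input buffer, and
output array are assumed to already exist, and the names here are illustrative rather
than taken from the app's code. At a typical camera rate of 30 frames per second, the
per-frame budget is roughly 33 ms.

import java.nio.ByteBuffer;

import org.tensorflow.lite.Interpreter;

public class LatencyProbe {

    // Times one forward pass of the model and returns the elapsed
    // time in milliseconds.
    public static double timedInferenceMs(Interpreter interpreter,
                                          ByteBuffer input, float[][] output) {
        long start = System.nanoTime();
        interpreter.run(input, output);            // single inference
        return (System.nanoTime() - start) / 1e6;  // nanoseconds -> ms
    }
}
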
The evaluation of the proposed system will involve benchmarking it against existing approaches
using standard datasets. Performance metrics such as accuracy, precision, recall, and F1-score will
be utilized to assess the system's effectiveness in detecting and classifying emotions accurately.
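For reference, all four metrics reduce to simple ratios of confusion-matrix counts.
The sketch below computes them for a single emotion class using made-up counts; it
is a generic illustration, not part of the project's code.

public class Metrics {

    public static void main(String[] args) {
        // hypothetical counts for one emotion class (e.g. "Happy"):
        // true positives, false positives, false negatives, true negatives
        int tp = 80, fp = 10, fn = 20, tn = 90;
        double accuracy  = (double) (tp + tn) / (tp + tn + fp + fn); // 0.850
        double precision = (double) tp / (tp + fp);                  // 0.889
        double recall    = (double) tp / (tp + fn);                  // 0.800
        double f1 = 2 * precision * recall / (precision + recall);   // 0.842
        System.out.printf("acc=%.3f  p=%.3f  r=%.3f  f1=%.3f%n",
                accuracy, precision, recall, f1);
    }
}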

Solution:
We built an Android app that continuously captures camera frames and instantly
reports the emotion of each detected human face. Faces are located with OpenCV's
Haar cascade classifier, and each cropped face region is classified by a TensorFlow
Lite CNN model, as the code below shows.

Code:

package com.example.imagepro;

import android.content.Context;
import android.content.res.AssetFileDescriptor;
import android.content.res.AssetManager;
import android.graphics.Bitmap;
import android.util.Log;

import org.opencv.android.Utils;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Point;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;
import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.gpu.GpuDelegate;

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class facialExpressionRecognition {

    // TensorFlow Lite interpreter that runs the emotion model
    // (the TensorFlow Lite dependency must be added to build.gradle first)
    private Interpreter interpreter;
    // input size of the model (48 for a 48x48 face crop)
    private int INPUT_SIZE;
    // height and width of the current camera frame
    private int height = 0;
    private int width = 0;
    // GPU delegate used to run the interpreter on the GPU
    private GpuDelegate gpuDelegate = null;
    // Haar cascade classifier used for face detection
    private CascadeClassifier cascadeClassifier;

    // called from CameraActivity
    facialExpressionRecognition(AssetManager assetManager, Context context,
                                String modelPath, int inputSize) throws IOException {
        INPUT_SIZE = inputSize;
        // configure the interpreter to use the GPU and four threads
        Interpreter.Options options = new Interpreter.Options();
        gpuDelegate = new GpuDelegate();
        options.addDelegate(gpuDelegate);
        options.setNumThreads(4); // set this according to your phone
        // load the model weights into the interpreter
        interpreter = new Interpreter(loadModelFile(assetManager, modelPath), options);
        Log.d("facial_Expression", "Model is loaded");

        // load the Haar cascade classifier for face detection;
        // OpenCV needs a real file path, so the classifier is copied
        // from res/raw into the app's private storage first
        try {
            InputStream is = context.getResources()
                    .openRawResource(R.raw.haarcascade_frontalface_alt);
            File cascadeDir = context.getDir("cascade", Context.MODE_PRIVATE);
            File mCascadeFile = new File(cascadeDir, "haarcascade_frontalface_alt");
            FileOutputStream os = new FileOutputStream(mCascadeFile);
            byte[] buffer = new byte[4096];
            int byteRead;
            // read() returns -1 when there is no more data
            while ((byteRead = is.read(buffer)) != -1) {
                os.write(buffer, 0, byteRead);
            }
            is.close();
            os.close();
            cascadeClassifier = new CascadeClassifier(mCascadeFile.getAbsolutePath());
            Log.d("facial_Expression", "Classifier is loaded");
            // expected logcat output at this point:
            //   I/MainActivity: OpenCv Is loaded
            //   D/facial_Expression: Model is loaded
            //   D/facial_Expression: Classifier is loaded
            // each cropped face is later passed through the interpreter,
            // which returns the facial expression/emotion
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    // input and output are in Mat format;
    // called from onCameraFrame() in CameraActivity
    public Mat recognizeImage(Mat mat_image) {
        // the camera frame arrives rotated; rotate it by 90 degrees
        // (transpose + flip) so prediction runs on an upright face
        Core.flip(mat_image.t(), mat_image, 1);
        // convert the frame to grayscale for face detection
        Mat grayscaleImage = new Mat();
        Imgproc.cvtColor(mat_image, grayscaleImage, Imgproc.COLOR_RGBA2GRAY);
        height = grayscaleImage.height();
        width = grayscaleImage.width();

        // minimum face size: faces smaller than 10% of the frame height
        // are ignored
        int absoluteFaceSize = (int) (height * 0.1);
        MatOfRect faces = new MatOfRect();
        // detect faces in the frame, if the classifier loaded successfully
        if (cascadeClassifier != null) {
            cascadeClassifier.detectMultiScale(grayscaleImage, faces, 1.1, 2, 2,
                    new Size(absoluteFaceSize, absoluteFaceSize), new Size());
        }

        // process each detected face
        Rect[] faceArray = faces.toArray();
        for (int i = 0; i < faceArray.length; i++) {
            // draw a green rectangle around the face
            Imgproc.rectangle(mat_image, faceArray[i].tl(), faceArray[i].br(),
                    new Scalar(0, 255, 0, 255), 2);
            // crop the face region out of the original frame
            Rect roi = new Rect((int) faceArray[i].tl().x, (int) faceArray[i].tl().y,
                    ((int) faceArray[i].br().x) - (int) (faceArray[i].tl().x),
                    ((int) faceArray[i].br().y) - (int) (faceArray[i].tl().y));
            Mat cropped_rgba = new Mat(mat_image, roi);
            // convert the cropped Mat to a Bitmap and resize it to 48x48,
            // the input size the model expects
            Bitmap bitmap = Bitmap.createBitmap(cropped_rgba.cols(),
                    cropped_rgba.rows(), Bitmap.Config.ARGB_8888);
            Utils.matToBitmap(cropped_rgba, bitmap);
            Bitmap scaledBitmap = Bitmap.createScaledBitmap(bitmap, 48, 48, false);
            // convert the scaled bitmap to a ByteBuffer the model can read
            ByteBuffer byteBuffer = convertBitmapToByteBuffer(scaledBitmap);
            // run the model; the output is a single float emotion index
            float[][] emotion = new float[1][1];
            interpreter.run(byteBuffer, emotion);
            float emotion_v = emotion[0][0];
            Log.d("facial_expression", "Output: " + emotion_v);
            // map the float index to a label and draw it on the frame,
            // e.g. "Angry (2.934234)"
            String emotion_s = get_emotion_text(emotion_v);
            Imgproc.putText(mat_image, emotion_s + " (" + emotion_v + ")",
                    new Point((int) faceArray[i].tl().x + 10,
                            (int) faceArray[i].tl().y + 20),
                    1, 1.5, new Scalar(0, 0, 255, 150), 2);
            // font face, font scale, text color (R G B alpha), thickness
        }

        // after prediction, rotate the frame back by -90 degrees
        Core.flip(mat_image.t(), mat_image, 0);
        return mat_image;
    }

    private String get_emotion_text(float emotion_v) {
        // map the model's float output to an emotion label;
        // the interval boundaries can be tuned for better results
        String val = "";
        if (emotion_v >= 0 && emotion_v < 0.5) {
            val = "Surprise";
        } else if (emotion_v >= 0.5 && emotion_v < 1.5) {
            val = "Fear";
        } else if (emotion_v >= 1.5 && emotion_v < 2.5) {
            val = "Angry";
        } else if (emotion_v >= 2.5 && emotion_v < 3.5) {
            val = "Neutral";
        } else if (emotion_v >= 3.5 && emotion_v < 4.5) {
            val = "Sad";
        } else if (emotion_v >= 4.5 && emotion_v < 5.5) {
            val = "Disgust";
        } else {
            val = "Happy";
        }
        return val;
    }

    private ByteBuffer convertBitmapToByteBuffer(Bitmap scaledBitmap) {
        int size_image = INPUT_SIZE; // 48
        // 4 bytes per float, 3 channels (RGB)
        ByteBuffer byteBuffer =
                ByteBuffer.allocateDirect(4 * 1 * size_image * size_image * 3);
        byteBuffer.order(ByteOrder.nativeOrder());
        int[] intValues = new int[size_image * size_image];
        scaledBitmap.getPixels(intValues, 0, scaledBitmap.getWidth(), 0, 0,
                scaledBitmap.getWidth(), scaledBitmap.getHeight());
        int pixel = 0;
        for (int i = 0; i < size_image; ++i) {
            for (int j = 0; j < size_image; ++j) {
                final int val = intValues[pixel++];
                // extract R, G and B and scale each channel from 0-255 to 0-1
                byteBuffer.putFloat(((val >> 16) & 0xFF) / 255.0f);
                byteBuffer.putFloat(((val >> 8) & 0xFF) / 255.0f);
                byteBuffer.putFloat((val & 0xFF) / 255.0f);
            }
        }
        return byteBuffer;
    }

    private MappedByteBuffer loadModelFile(AssetManager assetManager,
                                           String modelPath) throws IOException {
        // memory-map the model file stored in the assets folder
        AssetFileDescriptor assetFileDescriptor = assetManager.openFd(modelPath);
        FileInputStream inputStream =
                new FileInputStream(assetFileDescriptor.getFileDescriptor());
        FileChannel fileChannel = inputStream.getChannel();
        long startOffset = assetFileDescriptor.getStartOffset();
        long declaredLength = assetFileDescriptor.getDeclaredLength();
        return fileChannel.map(FileChannel.MapMode.READ_ONLY,
                startOffset, declaredLength);
    }
}
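
The class above is meant to be driven from a CameraActivity built on OpenCV's
Android camera bridge. That activity is not reproduced in this report, so the
following is only a minimal sketch of the intended wiring; the model filename
"model.tflite", the field name recognizer, and the omission of the camera-view and
OpenCV loader setup are assumptions for illustration.

import java.io.IOException;

import android.os.Bundle;
import androidx.appcompat.app.AppCompatActivity;

import org.opencv.android.CameraBridgeViewBase;
import org.opencv.core.Mat;

public class CameraActivity extends AppCompatActivity
        implements CameraBridgeViewBase.CvCameraViewListener2 {

    private facialExpressionRecognition recognizer;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // layout and JavaCameraView setup omitted for brevity
        try {
            // "model.tflite" is a placeholder asset name; 48 matches the
            // 48x48 input size used by convertBitmapToByteBuffer above
            recognizer = new facialExpressionRecognition(
                    getAssets(), CameraActivity.this, "model.tflite", 48);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void onCameraViewStarted(int width, int height) { }

    @Override
    public void onCameraViewStopped() { }

    @Override
    public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
        // each frame is passed through the recognizer, which detects faces,
        // classifies the emotion, and draws the annotations on the frame
        return recognizer.recognizeImage(inputFrame.rgba());
    }
}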
Advantages:
1. Improved User Experience: Face detection apps can enhance user experience by offering intuitive
and personalized interactions. For example, apps can use face detection to automatically adjust
settings, customize content, or provide tailored recommendations based on the user's facial features.

2. Enhanced Security: Face detection can be used as a biometric authentication method, providing a
more secure and convenient way to unlock devices, access sensitive information, or perform secure
transactions. It offers an additional layer of security compared to traditional password-based
authentication.
3. Photography and Image Editing: Face detection can be utilized in photography and image editing
apps to identify and track faces in photos, enabling features such as automatic focus and exposure
adjustment, facial recognition, and applying filters or effects specifically on detected faces.

4. Social Media and Entertainment: Face detection is widely used in social media platforms for
various purposes, including automatic tagging of people in photos, applying filters or augmented
reality effects to faces in real-time, and creating personalized video or image content based on facial
expressions.

5. Marketing and Advertising: Face detection apps can be utilized in marketing and advertising
campaigns to gather demographic data and analyze customer reactions. This information can help
businesses tailor their products, advertisements, and user experiences to specific target audiences.

Disadvantages:
1. Privacy Concerns: Similar to face detection apps, face emotion detection apps raise privacy
concerns. The analysis of facial expressions and emotions involves capturing sensitive data, requiring
careful consideration of privacy regulations and ensuring proper consent from users.

2. Accuracy and Bias: Face emotion detection algorithms may not always accurately interpret facial
expressions, leading to misinterpretation of emotions. Moreover, these algorithms can exhibit biases,
especially when dealing with diverse demographics, cultural differences, or individuals with atypical
expressions.

3. Emotional Complexity: Facial expressions and emotions are complex and context-dependent. Face
emotion detection apps may not fully capture the subtleties or nuances of emotions, potentially
leading to oversimplification or misinterpretation of the emotional state.

4. Ethical Considerations: Face emotion detection apps need to address ethical considerations,
including responsible use of emotional data, ensuring transparency in data handling, and guarding
against potential misuse or unauthorized access.

Applications:

1. Psychology and Mental Health: Emotion detection can assist in studying emotional
responses, personality traits, and mental health conditions by providing an objective
record of facial expressions over time.

2. Market Research and Advertising: Analyzing customers' facial reactions to products
and advertisements gives businesses valuable insight into consumer preferences and
engagement.

3. Human-Computer Interaction: Emotion-aware systems enable more natural and
intuitive interfaces that adapt to the user's emotional state, enhancing the overall
user experience.

4. Entertainment and Gaming: Real-time emotion detection can drive immersive and
personalized experiences, such as games and content that respond to the player's
mood.
References:

1. Ekman, P., & Friesen, W. V. (1971). Constants across cultures in the face and emotion. Journal of Personality and Social Psychology, 17(2), 124-129.

2. Bartlett, M. S., Littlewort, G. C., Frank, M. G., Lainscsek, C., Fasel, I., & Movellan, J. R. (2005). Automatic recognition of facial actions in spontaneous expressions. Journal of Multimedia, 1(6), 22-35.

3. Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT Press.

4. Liu, J., Luo, S., Yu, Z., Chen, M., Zhou, M., & Cui, X. (2020). A deep learning model for facial emotion recognition based on extended data samples. Sensors, 20(2), 483.

5. Mollahosseini, A., Hasani, B., & Mahoor, M. H. (2017). AffectNet: A database for facial expression, valence, and arousal computing in the wild. IEEE Transactions on Affective Computing, 10(1), 18-31.
