Abstract - This project aims to recognize faces in an image, video, or live camera feed using a deep learning-based Convolutional Neural Network model that is both fast and accurate. Face recognition is the process of identifying faces in an image and has practical applications in a variety of domains, including information security, biometrics, access control, law enforcement, smart cards, and surveillance systems. Deep learning uses numerous layers to discover representations of data at different levels of abstraction, and it has improved the landscape for research in facial recognition. The introduction of deep learning has bettered state-of-the-art implementations of face recognition and has stimulated success in practical applications. Convolutional neural networks, a kind of deep neural network, have proven successful in the face recognition domain. For real-time systems, sampling must be done before using CNNs, since complete images (all the pixel values) are otherwise passed as input to the network; feature selection, feature extraction, and training are then performed, which can make a convolutional neural network implementation complicated and time-consuming.
Keywords – Face detection, face recognition, deep learning, convolutional neural networks
I. INTRODUCTION
Face recognition is a unique technique for performing authentication biometrically. It has broad applications in areas such as finance, security, and the military. Face recognition has gained a lot of interest in the last few years, which has led several researchers to develop new techniques and improve existing ones. Its wide range of applications appeals to researchers and keeps them driven. Face recognition can be performed on a real-time video by considering it as a sequence of frames, where each frame is treated as a single image. Before facial recognition can be carried out, we must first ensure the presence of a face in the frame, which is done by performing face detection. In this step, the model detects the face and separates it from the image for identification, eliminating redundant data that is not required for facial recognition. This reduces the number of pixels the model has to work on and hence increases the overall efficiency.

However, facial recognition [12] also faces some problems that make it hard to perform. Factors such as pose variation, facial hair, image illumination, image background, and facial expressions affect the image, and the outcome can differ based on these characteristics. In situations where the face is not visible or is hidden from the camera, the face might not even be detected. Thus, the image used as input to the model can be captured under conditions different from those of the images against which it is examined.

We are attempting to introduce this technique in universities, so that common issues such as the time consumed in taking attendance, students marking proxy attendance, and mass bunks during lectures can be prevented. Marking attendance in classes is an overwhelming task for professors: it is not only time-consuming, but students also tend to mark proxy attendance, which leads to inaccurate attendance records. Manual attendance is certainly tough for professors, as it makes it difficult to maintain a record of the students. Conventional methods have their difficulties, and the majority of them lack dependability. This leads to an increasing need for better methods of taking attendance. This research stresses using facial recognition as a technique for marking attendance. Real-time automated attendance monitoring without wasting the teacher's precious time is the main objective of this project. Not only does this method save time, but it is also more reliable than traditional methods.

In the next section, we discuss some of the facial recognition algorithms, assessing their pros and cons. In Section III, we present our proposed model and briefly explain how it works. The experimental results obtained after testing the model are given in Section IV. Section V concludes the paper with a summary.
Figure 1: Flow chart showing the working of the facial recognition component of our system
i. The Principal Component Analysis (PCA) [6] method, which uses the concept of Eigenfaces, has an accuracy of 71.15%. The computations required in this method are much lower compared to the other methods, as it considers only the 2D face recognition problem; thus, the complexity is reduced by a substantial amount.
ii. The Linear Discriminant Analysis (LDA) [7] method uses Fisherfaces and has an accuracy of 77.90%. It mainly focuses on reducing the number of features applied to each face.
iii. The Local Binary Pattern (LBP) [8] method has an accuracy of 82.94%.
iv. The Gabor classifier [9], which takes local features into consideration, has an accuracy of 92.35%. It is not designed specifically for face recognition, but its filters can recognize various prominent features in an object.
Convolutional Neural Networks [11] are a type of neural network mostly used in the field of image classification, particularly face recognition. A convolutional neural network takes an input image and tweaks the weights of the network based on that image so that it can differentiate it from other images. This allows the network to learn and identify the important characteristics (those essential for recognizing different faces) on its own. The need for human supervision is thus minimized: the network can automatically differentiate images into separate classes. Convolutional Neural Networks also reduce the pre-processing required to train the model and thus utilize less computational power. Due to these advantages, deep learning algorithms like convolutional neural networks have become the standard in facial recognition.
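For concreteness, a small CNN classifier of the kind described above could be defined as follows. This is a generic sketch using the tf.keras API with arbitrary layer sizes; it is not the architecture used by our system, which relies on the pre-trained network described in Section III.

    from tensorflow.keras import layers, models

    def build_face_classifier(num_students, input_shape=(96, 96, 3)):
        """A minimal CNN image classifier: stacked convolutions learn the
        discriminative features, and a softmax layer assigns one class per student."""
        model = models.Sequential([
            layers.Conv2D(32, (3, 3), activation="relu", input_shape=input_shape),
            layers.MaxPooling2D((2, 2)),
            layers.Conv2D(64, (3, 3), activation="relu"),
            layers.MaxPooling2D((2, 2)),
            layers.Flatten(),
            layers.Dense(128, activation="relu"),
            layers.Dense(num_students, activation="softmax"),  # one output per student
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model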
III. PROPOSED WORK

We have proposed a method for an automated attendance system using facial recognition. The system should be able to detect faces in each frame of a real-time video. Further, after recognizing the detected faces, it should be able to mark the attendance of the students whose faces are recognized. The crux of the system is that it marks as present only those students who have attended more than sixty percent of the total class; the rest of the students are marked absent.

The proposed system requires a video camera in the classroom as an initial requirement and is designed to work on video footage of the students. The underlying idea is to extract features of the students' faces from the footage and compare these features with those extracted from the images used for training the model. If the features match, the student is marked present for that single frame. The overall workflow is as follows:
- A video camera placed in the classroom continually records the class and passes the input stream to the attendance system.
- At regular intervals, individual frames are analyzed by the model.
- All the faces of the students in a particular frame are detected and, in turn, recognized by our model.
- The same process is followed for each frame for real-time video analysis and facial recognition.
- At the end of a class, after all the frames have been processed, the model marks the students as absent or present based on its results.
- A student must be recognized in at least 60 percent of the frames to be marked present (a minimal sketch of this aggregation step follows this list).
- The database is updated with the attendance of all the students for that particular date.
- The database also stores the attendance records for all dates, which makes it easier to manage the attendance records.
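As an illustration, the per-frame recognition results could be aggregated against the 60 percent threshold roughly as shown below. The helper recognize_faces_in_frame, the frame list, and the student roster are placeholders for this sketch rather than our exact implementation; a compatible helper is sketched in the recognition example in Section IV.

    from collections import Counter

    def mark_attendance(frames, enrolled_students, recognize_faces_in_frame, threshold=0.6):
        """Mark each enrolled student present or absent for one lecture.

        recognize_faces_in_frame is assumed to return the set of student
        names recognized in a single frame.
        """
        seen = Counter()
        for frame in frames:
            for name in recognize_faces_in_frame(frame):
                seen[name] += 1
        total = max(len(frames), 1)  # avoid division by zero for an empty video
        return {
            name: "present" if seen[name] / total >= threshold else "absent"
            for name in enrolled_students
        }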
Pre-processing

Our model is based on deep learning-based facial recognition. We have used the face_recognition library, which provides a pre-trained network, to generate 128-d encoding vectors for the faces in the training dataset.
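These encodings can be generated once, offline, and stored for later use. A minimal sketch using the face_recognition API is given below; the dataset directory layout (one folder per student) and the output file name encodings.pickle are assumptions made for illustration.

    import os
    import pickle
    import face_recognition

    def build_encodings(dataset_dir="dataset", out_path="encodings.pickle"):
        """Encode every training image as a 128-d vector and pickle the result."""
        names, encodings = [], []
        for student in os.listdir(dataset_dir):          # one folder per student (assumed layout)
            student_dir = os.path.join(dataset_dir, student)
            if not os.path.isdir(student_dir):
                continue
            for file_name in os.listdir(student_dir):
                image = face_recognition.load_image_file(os.path.join(student_dir, file_name))
                faces = face_recognition.face_encodings(image)  # one 128-d vector per detected face
                if faces:                                 # skip images where no face is found
                    names.append(student)
                    encodings.append(faces[0])
        with open(out_path, "wb") as f:
            pickle.dump({"names": names, "encodings": encodings}, f)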
Pre-processing involves the following steps:

Step 1: Face Detection
- Convert the given image to grayscale.
- Apply Haar features to the image by dividing it into smaller squares and detecting the presence of different features such as edges and corners.
- We obtain a basic structure of the image, representing the obtained features.
- We can compare this structure to a previously extracted pattern of a face; this helps us identify the different faces present in the image (a minimal OpenCV sketch follows this list).
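Using OpenCV's pre-trained frontal-face Haar cascade (the classifier we use for pre-processing, as noted in Section IV), this detection step can be sketched roughly as follows; the parameter values shown are illustrative defaults rather than our tuned settings.

    import cv2

    # Pre-trained frontal-face Haar cascade shipped with OpenCV.
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def detect_faces(frame):
        """Return bounding boxes (x, y, w, h) of faces found in a BGR frame."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # Haar features work on grayscale
        return face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)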
Step 2: Resolving the issue of projecting faces
- Although we have isolated faces, the computer mistakes a face looking in different directions for different faces.
- To resolve this issue, we need to alter the positioning of the face in the image.
- We take a set of landmark points that lie on a face and, using these points, we detect the face, its boundary, and the positions of the eyes, nose, and lips in the input image.
- Next, the image is rotated such that the transformed face in the rotated image is as close as possible to a perfectly centered face.
- Using this method, the computer does not categorize the projections of a face as different faces (a rough alignment sketch follows this list).
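One simple way to realize this rotation, sketched below under the assumption that the eye landmarks returned by face_recognition are used, is to rotate the image so that the line joining the eye centers becomes horizontal. It illustrates the idea rather than the exact transform applied in our pipeline.

    import cv2
    import numpy as np
    import face_recognition

    def align_face(image):
        """Rotate the image so the eyes lie on a horizontal line (rough alignment)."""
        landmarks = face_recognition.face_landmarks(image)
        if not landmarks:
            return image                                  # no face found, leave unchanged
        left_eye = np.mean(landmarks[0]["left_eye"], axis=0)
        right_eye = np.mean(landmarks[0]["right_eye"], axis=0)
        dy, dx = right_eye[1] - left_eye[1], right_eye[0] - left_eye[0]
        angle = np.degrees(np.arctan2(dy, dx))            # tilt of the eye line
        center = (float((left_eye[0] + right_eye[0]) / 2),
                  float((left_eye[1] + right_eye[1]) / 2))
        rotation = cv2.getRotationMatrix2D(center, angle, 1.0)
        h, w = image.shape[:2]
        return cv2.warpAffine(image, rotation, (w, h))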
The output of the model, i.e., the detected and recognized faces in the input data, is shown in Figure 3. The only step left is to mark the attendance of the students whose faces were classified in the previous step.
To determine the number of images per person required for our model to provide the best results, we trained and tested the model with different numbers of images per person. Starting from a single image of each person, we performed the testing with as many as 25 images per person. The images of every subject differ in their facial expressions, contrast, exposure, configuration, etc. We implemented the model on a 64-bit system using Python 3.6.9. For the pre-processing of input images, we used the OpenCV package with its frontal-face Haar cascade.
Further, the pickle file generated from the training encodings is used by the model to recognize the faces, instead of training the model every time it is given an input.
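Recognition against the pickled encodings can be sketched as follows; the file name encodings.pickle, the tolerance value, and the surrounding glue code are illustrative assumptions rather than the exact code of our system. A helper of this shape is what the attendance-marking sketch in Section III assumes.

    import pickle
    import face_recognition

    with open("encodings.pickle", "rb") as f:            # encodings produced at training time
        data = pickle.load(f)                            # {"names": [...], "encodings": [...]}

    def recognize_faces_in_frame(frame_rgb, tolerance=0.6):
        """Return the set of known student names recognized in one RGB frame."""
        recognized = set()
        boxes = face_recognition.face_locations(frame_rgb)
        for encoding in face_recognition.face_encodings(frame_rgb, boxes):
            matches = face_recognition.compare_faces(data["encodings"], encoding, tolerance)
            distances = face_recognition.face_distance(data["encodings"], encoding)
            if any(matches):
                best = min(range(len(distances)), key=lambda i: distances[i])
                if matches[best]:                        # accept the closest matching encoding
                    recognized.add(data["names"][best])
        return recognized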
We analyzed our model both quantitatively and qualitatively. We first assessed the number of faces the model was able to detect when given real-time video input, i.e., quantitative analysis, and then measured the accuracy of the model by calculating the number of faces that were recognized correctly, i.e., qualitative analysis.
Case      Images per person    Faces detected    Faces recognized correctly    Accuracy (%)
Case 2            3                  13                      9                     69.23
Case 3            5                  13                     10                     76.92
Also, the results of our framework, when stored in the database, are sorted according to the dates on which the lectures have taken place. Initially, all the students for a lecture are marked absent; on recognizing their faces, and if they satisfy the criteria mentioned above, the model updates their attendance in the database, marking them present for that lecture. A snapshot of a part of the database, showing the names of only five students, is shown in Fig. 5. We have used WampServer for our MySQL database.
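As an illustration of this update step, the per-lecture records could be written to the MySQL database roughly as follows; the mysql-connector-python client, the attendance table, and its column names are assumptions made for this sketch and are not taken from our schema.

    import mysql.connector  # assumed client; the paper only specifies a WampServer/MySQL database

    def update_attendance(present_students, lecture_date):
        """Mark recognized students present for one lecture date.

        Assumes a hypothetical table attendance(student_name, lecture_date, status)
        in which every student has already been initialized as 'absent'.
        """
        conn = mysql.connector.connect(
            host="localhost", user="root", password="", database="attendance_db")
        cursor = conn.cursor()
        for name in present_students:
            cursor.execute(
                "UPDATE attendance SET status = 'present' "
                "WHERE student_name = %s AND lecture_date = %s",
                (name, lecture_date),
            )
        conn.commit()
        cursor.close()
        conn.close()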