
2020 3rd International Conference on Information and Computer Technologies (ICICT)

Face Recognition Techniques using Statistical and Artificial Neural Network: A Comparative Study

Nawaf O. Alsrehin
Information Systems Department
Faculty of Information Technology and Computer Sciences
Yarmouk University, Irbid, Jordan
n_alsrehin@yu.edu.jo

Mu’tasem A. Al-Taamneh
Information Systems Department
Faculty of Information Technology and Computer Sciences
Yarmouk University, Irbid, Jordan
2015930014@ses.yu.edu.jo

Abstract—Face recognition is the process of identifying a person by their facial characteristics from a digital image or a video frame. Face recognition has extensive applications, and massive development is expected in future technologies. The main contribution of this research is a comparative study of different statistical-based face recognition techniques, namely Eigen-faces, Fisher-faces, and Local Binary Patterns Histograms (LBPH), measuring their effectiveness and efficiency on real-database images. These recognizers are still used at the core of commercial face recognition products. Additionally, this research comprehensively compares 17 face-recognition techniques adopted in research and industry that use artificial neural networks, critiques them, and organizes them into understandable categories. The research also provides directions and suggestions for overcoming the direct and indirect issues facing face recognition. It finds that there is no existing recognition method that the face recognition community has agreed on and that solves all the issues facing recognition, such as pose variation, illumination, and blurry and low-resolution images. This study is important to the recognition community, software companies, and government security officials, and it has a direct impact on drawing a clear path for new face recognition propositions. It is also one of the larger studies with respect to the number of reviewed approaches and techniques.

Keywords—face detection; face recognition; artificial intelligence; artificial neural network.

I. INTRODUCTION AND BACKGROUND

Human face recognition is the process of identifying or verifying a person from a digital image or a video frame from a video source. The complete process of human face recognition includes three stages: (1) face detection, (2) feature extraction, and (3) face classification; various techniques have been developed for each of these stages. Face detection is the process of locating and determining a segment of a visual scene that includes a face, or parts of it. Face detection algorithms aim to determine whether there is any face in an image or not; Viola-Jones, for example, can detect faces in real time with high accuracy [1]. Feature extraction is the process of extracting meaningful facial features that are required to efficiently determine and describe faces. Face classification is the last step in the face recognition process: it is the process of labeling or matching a person's face from a digital image or a live capture with the stored face for that person.

Humans recognize faces on a daily basis; developing machines that imitate this natural human capability is important in many application fields and has become an active research area in the computer vision, pattern recognition, and image processing communities, with applications such as face recognition and identity verification [2] [3], face tracking from video surveillance [4], facial expression, gender, and age recognition, 3D facial shape reconstruction [5], criminal identification [6], and healthcare [7].

Masupha et al. [8] classified face recognition approaches into feature-based and holistic-based techniques. Feature-based techniques identify distinct local facial features, such as the mouth, eyes, and cheeks, as well as other facial points, by measuring the geometrical relationships between these facial points. The facial image is then represented using a geometrical feature vector and finally associated with the feature vector of a known person. Feature-based techniques are further classified into geometric feature-based and Elastic Bunch Graph approaches [9]. Holistic (also called appearance-based) techniques use the complete image information to identify faces. They are divided into statistical and artificial intelligence techniques. Statistical-based techniques extract statistical values from the whole face image; these values are then compared with related templates or models to identify a person from a database. Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Local Binary Patterns (LBP) are examples of statistical approaches [10] [9]. Artificial intelligence techniques use concepts of machine learning to perform face recognition. Genetic algorithms, Artificial Neural Networks (ANN), Hidden Markov Models (HMM), Support Vector Machines (SVM), and Radial Basis Function (RBF) networks are examples of artificial intelligence techniques. Figure 1 shows the general structure of these techniques.

Despite the progress in various aspects of face detection and recognition, only a limited number of comparison studies can be found that review and compare the growing body of literature with a focus on the statistical and artificial neural network algorithms and techniques adopted in these areas. Among those, techniques that use artificial neural networks for face detection have been reviewed in [11], while Manisha Kasar et al. in [12] reviewed the approaches that use artificial neural networks for face recognition.

This paper investigates the latest techniques that use artificial neural networks for face recognition and puts forward a hierarchical architecture that summarizes and classifies these techniques. This paper does not follow the typical one-to-one comparison; instead, it provides a structure-based classification that groups the state-of-the-art research on improving face recognition using artificial neural networks. More than 30 research articles published between 2010 and 2018 were collected and reviewed from the leading journals and conferences in these areas.

Figure 1. A general structure of face recognition techniques.

BalaYesu et al. in [9] compared PCA, Kernel PCA (KPCA), LDA, and Kernel Fisher Analysis (KFA) face recognition techniques. The evaluation results using the ORL database showed that the Gabor PCA, Gabor KPCA, and Gabor LDA techniques achieved 100% accuracy when the number of training images is 5. The authors also showed that using five images as a training set reduces the Equal Error Rate.

Prakash et al. in [13] compared PCA, KPCA, and Kernel Fisher Discriminant Analysis (KFDA) using the ORL, Yale, and UMIST face datasets. Evaluation results showed that when the number of training images is increased, the first-rank recognition rate increases. In addition, the KFA algorithm achieved the best results since it functions efficiently in both linear and non-linear image subspaces.

Anggo and Arapu in [14] developed an application that uses Fisherface to recognize faces. Evaluation results showed that Fisherface achieves 100% recognition accuracy when the test image is the same as the training image and recognizes the image correctly 93% of the time when the test image is different from the training image. The authors also showed that the Fisherface algorithm can recognize faces in noisy and blurry images. However, it fails to recognize faces in images that have small scales and different poses. Providing images with better scale can overcome the scaling problem, and using more training images with various poses can overcome the pose problem.

Nikolaos Stekas et al. in [15] designed a face recognition implementation on a System on Chip (SoC) integrated with an FPGA. This implementation utilizes LBPH to extract features from test face images and then uses the Manhattan distance to retrieve the correct match from the system's face dataset.

Saket Karve et al. in [16] proposed a statistical feature-based approach for face recognition. In addition, the authors provided a detailed study of various feature extraction techniques involving principal and independent component analysis. The comparison was done based on accuracy measurements and execution time. Experimental results showed that the factor analysis method outperforms principal component analysis and independent component analysis in terms of accuracy.

Chawla and Trivedi in [17] compared the AdaBoost, PCA, LDA, and EBGM algorithms for face recognition. The evaluation criteria are based on origin, success rate, eigenvalues, and the vector score produced for an image. Comparison results showed that PCA achieves the highest success rate and is suitable for scaling from larger areas down to smaller datasets with complete information; however, it is not suitable for moving scenes. The LDA method uses a linear projection from the image space to a low-dimensional space, but its major drawback is the small-sample-size problem.

Alamri and Hausawi in [18] compared the performance of four different algorithms, namely LBP, LDA, PCA, and Scale Invariant Feature Transformation (SIFT). The comparison was done based on four performance curves: Receiver Operating Characteristics (ROC), Precision Recall (PR), Detection Error Tradeoff (DET), and the Cumulative Match Curve (CMC), using three different-size sets. LBP and SIFT achieved the most reasonable results throughout the experiments. Table 1 shows the overall comparison between the different recognition techniques recently compared by researchers.
TABLE I. OVERALL COMPARISON BETWEEN THE DIFFERENT RECOGNITION TECHNIQUES RECENTLY USED, ALONG WITH THE RECOGNITION ALGORITHM.

[Table I indicates, for each of the surveyed studies [9], [19], [14], [20], [21], [22], [23], [15], [24], [17], [25], and [18], which of the following techniques were evaluated: PCA, Gabor PCA, KPCA, Gabor KPCA, LDA, Gabor LDA, AdaBoost LDA, KLDA, Gabor KLDA, Fisherface, Eigenface, LBPH, LBP, PCA with KNN, LGBP, EBGM, FTC, EVA, ANN, ICA, and SIFT.]

II. STATISTICAL TECHNIQUES

A. Eigenfaces Face Recognition Algorithm

The Eigenfaces algorithm uses the appearance-based approach to recognize faces. It assumes that not all face parts are equally important or useful for face recognition, mirroring the natural way humans compare people and differentiate one face from another: humans recognize a person by detecting distinct features in the face, like the nose, eyes, forehead, or cheeks, and the location of and relation between one part and another. The algorithm captures the variation in a collection of face images, catches the maximum variation among different face parts, and uses this information to encode and compare images of individual faces in a holistic manner [26]. For example, from the eyes to the nose there is a considerable change, and the same applies from the nose to the mouth; in other words, it focuses on the areas of maximum change. Eigenfaces uses a set of eigenvectors, the principal components of the face images, which are derived from the covariance matrix of the set of face images in a high-dimensional vector space. This reduces the dimension of the representation of the original training images. Among the many recognizer approaches, Eigenface is considered to be the first working facial recognition technology; it served as the basis for one of the top commercial face recognition products and is still considered a standard of comparison to demonstrate the expected performance of any newly developed recognizer approach [27].
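To make the eigen-decomposition step concrete, the sketch below shows one common way of computing an eigenface basis with NumPy. It is a minimal illustration of the PCA step described above, not the authors' implementation; the function names and the number of retained components are assumptions.

```python
import numpy as np

def eigenface_basis(train_images, num_components=50):
    """Compute a PCA (eigenface) basis from equally sized grayscale face images.

    train_images: array of shape (n_samples, height, width).
    Returns the mean face and the top `num_components` eigenfaces (flattened).
    """
    n = train_images.shape[0]
    X = train_images.reshape(n, -1).astype(np.float64)   # each row is a flattened face
    mean_face = X.mean(axis=0)
    A = X - mean_face                                     # centered data

    # Turk & Pentland trick: eigenvectors of the small n x n matrix A A^T,
    # once mapped back through A^T, give the principal components of A^T A.
    small_cov = A @ A.T
    eigvals, eigvecs = np.linalg.eigh(small_cov)          # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:num_components]
    eigenfaces = (A.T @ eigvecs[:, order]).T              # shape (k, height*width)
    eigenfaces /= np.linalg.norm(eigenfaces, axis=1, keepdims=True)
    return mean_face, eigenfaces

def project(face, mean_face, eigenfaces):
    """Project a face image onto the eigenface subspace."""
    return eigenfaces @ (face.reshape(-1).astype(np.float64) - mean_face)
```

A test face is then projected with `project` and matched to the training face whose projection is closest, for example by Euclidean distance.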
shows sample of these faces and how each subject is defined
B. Fisherface Face Recognition Algorithm
by a serial number used in the first part of images name. We
Fisherface face recognition algorithm is an adjusted used 60% of the images for each subject for training and 40%
version of the Eigenfaces algorithm that eliminates the for testing.
consideration of dominant face part as part for representation.
Fisherface method is based on the reduction of face space
dimension using PCA method, which differentiate one
person from others, then apply the Fisher's Linear
Discriminant (FDL) method, known as LDA method, to
(a)
obtain feature of image characteristic. After that, it uses the
minimum Euclidean for identification or matching face
image. LDA maximizes the ratio of between-classes to
within-classes scatter, instead of maximizing the overall
scatter; it treats image data as a vector somewhere in a high-
dimensional image space. One thing to note here is that (b)
Fisherface only prevents features of one person from
Figure 2. Sample of the CK image database, (a) different facial
becoming dominant, but it still considers illumination expressions for the same person, and (b) different persons.
changes as a useful feature.
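As a concrete reference point, OpenCV's contrib module exposes Eigenface, Fisherface, and LBPH recognizers behind one interface; the following sketch shows how a Fisherface model could be trained and queried. It is a hedged illustration of that API (it requires the opencv-contrib-python package), not the authors' code; the image file names and label values are hypothetical placeholders.

```python
import cv2
import numpy as np

# Training data: equally sized grayscale face crops with integer person labels.
# The file names below are hypothetical placeholders.
paths = ["s1_01.pgm", "s1_02.pgm", "s2_01.pgm", "s2_02.pgm"]
faces = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in paths]
labels = np.array([1, 1, 2, 2])

# Fisherface recognizer (PCA followed by LDA); it needs at least two classes.
recognizer = cv2.face.FisherFaceRecognizer_create()
recognizer.train(faces, labels)

# Predict the identity of a new face crop of the same size.
probe = cv2.imread("unknown.pgm", cv2.IMREAD_GRAYSCALE)
label, confidence = recognizer.predict(probe)
print(label, confidence)  # a lower confidence value means a closer match
```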
C. Local Binary Patterns Histograms (LBPH) Face Recognition Algorithm

The idea behind LBPH is not to deal with the image as a whole unit; instead, it tries to find the local structure of the image by comparing each pixel with its neighboring pixels. The comparison process starts by moving a window across the image, and at each move (each local part of the picture) the intensity value of the pixel located at the center is compared with its surrounding pixels: if a neighbor's value is less than or equal to the center's value, the neighbor is assigned 1, otherwise 0. After reading these 0/1 values under the 3×3 window in clockwise order, a local binary pattern such as 11100011 is generated. When this conversion has been applied to the whole image, a list of local binary patterns is obtained. Each binary pattern is then converted into a decimal number, and finally a histogram of all those decimal values is generated.
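The per-pixel coding step can be summarized in a few lines of NumPy. The sketch below follows the thresholding rule exactly as described above (neighbor ≤ center yields 1) and is only a rough illustration: much of the LBP literature uses the opposite comparison (neighbor ≥ center), and a full LBPH recognizer would additionally split the image into a grid and concatenate per-cell histograms.

```python
import numpy as np

def lbp_code(patch_3x3):
    """LBP code of the center pixel of a 3x3 grayscale patch,
    reading the neighbors clockwise from the top-left corner."""
    center = patch_3x3[1, 1]
    neighbors = [patch_3x3[0, 0], patch_3x3[0, 1], patch_3x3[0, 2],
                 patch_3x3[1, 2], patch_3x3[2, 2], patch_3x3[2, 1],
                 patch_3x3[2, 0], patch_3x3[1, 0]]
    bits = [1 if n <= center else 0 for n in neighbors]  # rule as described in the text
    return int("".join(map(str, bits)), 2)               # binary pattern -> decimal

def lbp_histogram(image):
    """Histogram of LBP codes over all interior pixels of a grayscale image."""
    h, w = image.shape
    codes = [lbp_code(image[r - 1:r + 2, c - 1:c + 2])
             for r in range(1, h - 1) for c in range(1, w - 1)]
    return np.bincount(codes, minlength=256)
```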

D. Comparison Between Different Statistical Techniques

To compare the above algorithms and find the one that produces the most accurate results on frontal-view faces, we used three different datasets of face images for training and testing: (1) the Cohn-Kanade AU-Coded Facial Expression Dataset [28], (2) the Glasgow Unfamiliar Face Dataset (GUFD), and (3) the AT&T dataset from AT&T Laboratories Cambridge. The Cohn-Kanade AU-Coded Facial Expression Dataset, referred to as CK, includes 486 different expression sequences from 123 university students; each sequence begins with a neutral expression and proceeds to a peak expression. The emotion label refers to what expression was requested rather than what may actually have been performed. The subjects ranged in age from 18 to 30 years; 65% were female, 15% were African American, and 3% were Asian or Latino. Subjects were instructed by an experimenter to perform a series of facial displays, and each subject has at least 12 images with different facial expressions. We used 1587 test images belonging to 110 of the 123 persons. Figure 2 shows a sample of these faces and how each subject is identified by a serial number used in the first part of the image name. We used 60% of the images of each subject for training and 40% for testing.

Figure 2. Sample of the CK image database: (a) different facial expressions for the same person, and (b) different persons.

For the GUFD dataset, we used images from 100 subjects; each subject has 7-20 colored images, taken from people who are away from the camera. Figure 3 shows samples of these images. For the AT&T dataset, we used images from 40 subjects; each subject has 6 to 12 grayscale images, taken from people with some obstacles such as wearing glasses, closed eyes, facial deflection, or an open mouth. Figure 4 shows samples of these images.

We focused on frontal face images in which the whole face appears in the image: two eyes, two ears, nose, and mouth. In addition, this research handles face occlusion, different facial expressions, and different imaging conditions. Moreover, this research works with both grayscale and colored images.

Figure 3. Sample of the GUFD image database: (a) different subjects at different distances from the camera, (b) different expressions for the same person and different persons.

Figure 4. Sample of the AT&T image database: different subjects with different obstacles.

We applied the Haar technique [11] for face detection and combined it with all three recognition techniques above. While there are various commercial and non-commercial face recognition algorithms, this research focuses on the above recognizer techniques because they are the most used by the commercial sector, freely available (open source), and most popular.
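For reference, the detection step described above is available in OpenCV as a Haar cascade classifier; the sketch below shows a typical way of cropping detected faces before handing them to a recognizer. This is a hedged example of standard OpenCV usage rather than the authors' exact pipeline; the input file name and the chosen resize dimensions are assumptions.

```python
import cv2

# Haar cascade bundled with OpenCV for frontal faces.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("subject_001.jpg")               # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Returns a list of (x, y, w, h) rectangles around detected faces.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

crops = []
for (x, y, w, h) in faces:
    # Crop and normalize the size so every recognizer sees equally sized inputs.
    crops.append(cv2.resize(gray[y:y + h, x:x + w], (92, 112)))
```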

To compare these recognizers, we used Python (version 3.6.3, Oct. 2017) as the development environment, along with the OpenCV plug-in library, to implement the face detection and recognition phases and to build a complete application with an appropriate GUI for users to interact with. In the training phase, the images were selected one by one and fed to the recognizer. In the comparison phase, for each algorithm we calculated the accuracy as

Accuracy = N_correct / N_total,

where N_correct represents the number of test images that were correctly annotated and N_total represents the total number of test images. In addition, we calculated the time needed to complete the training phase for all the recognition algorithms, in minutes, using the same number of images.
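To illustrate the evaluation loop, the following sketch trains the three OpenCV recognizers on the same split and computes the accuracy defined above. It is a minimal sketch under the assumption that the face lists hold equally sized grayscale crops with integer labels (for example, produced by the detection snippet earlier); it is not the authors' application code, and it reports wall-clock training time in seconds rather than minutes.

```python
import time
import cv2
import numpy as np

def compare_recognizers(train_faces, train_labels, test_faces, test_labels):
    """Train each OpenCV recognizer on the same 60%/40% split and report
    accuracy (N_correct / N_total) and training time in seconds."""
    recognizers = {
        "LBPH": cv2.face.LBPHFaceRecognizer_create(),
        "Eigenfaces": cv2.face.EigenFaceRecognizer_create(),
        "Fisherfaces": cv2.face.FisherFaceRecognizer_create(),
    }
    labels = np.array(train_labels)
    results = {}
    for name, rec in recognizers.items():
        start = time.time()
        rec.train(train_faces, labels)   # train_faces: list of equal-size grayscale crops
        train_time = time.time() - start
        correct = sum(1 for face, true in zip(test_faces, test_labels)
                      if rec.predict(face)[0] == true)
        results[name] = (correct / len(test_faces), train_time)
    return results
```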
Figure 5 shows the comparison results: Figure 5(a) shows the accuracy and Figure 5(b) shows the normalized execution time in seconds. The evaluation results show that the Fisherface algorithm achieves the highest accuracy on the GUFD dataset, while the LBPH algorithm achieves the highest accuracy on the AT&T dataset. The LBPH algorithm needs the lowest execution time on the GUFD dataset, while the Fisherface algorithm needs the lowest execution time on the AT&T dataset.

Figure 5. The evaluation results of the face recognition algorithms (LBPH, Eigenfaces, Fisherfaces) on the GUFD and AT&T datasets: (a) accuracy, (b) normalized time in seconds.

III. ARTIFICIAL NEURAL NETWORK

Jiaolong Yang et al. in [29] presented a Neural Aggregation Network (NAN) for video face recognition. The algorithm takes a set of a variable number of face images of a person as input and produces a compact, fixed-dimension feature representation for recognition as output. It handles high-quality face images and also achieves good results on low-quality images that suffer from blur and improper exposure. Experimental results on the IJB-A, YouTube Face, and Celebrity-1000 video datasets showed that the NAN algorithm consistently outperforms naive aggregation methods and achieves state-of-the-art accuracy. Table 2 summarizes the comparison results of the latest artificial neural network approaches for face recognition used in this study.
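To give a feel for the aggregation idea behind such networks (pooling a variable number of per-frame embeddings into one fixed-length vector using learned weights), here is a toy NumPy sketch. It is only an assumption-laden illustration of the general concept, not the architecture or training procedure from [29]; the embedding size and the single query vector q are invented for the example.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def aggregate_embeddings(frame_embeddings, q):
    """Collapse per-frame face embeddings (n_frames x d) into one d-dim vector.

    Each frame gets a scalar score from a query vector q; the scores are
    softmax-normalized and used as weights, so informative frames dominate."""
    scores = frame_embeddings @ q            # one score per frame
    weights = softmax(scores)
    return weights @ frame_embeddings        # weighted average, shape (d,)

# Toy usage: 5 frames with 128-dimensional embeddings and a random query vector.
rng = np.random.default_rng(0)
frames = rng.normal(size=(5, 128))
q = rng.normal(size=128)
video_descriptor = aggregate_embeddings(frames, q)
```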

IV. DISCUSSION AND CONCLUSION

This paper presents a comparison between different statistical-based face recognition techniques, namely Eigen-faces, Fisher-faces, and Local Binary Patterns Histograms (LBPH). It shows that statistical techniques work efficiently in terms of accuracy and performance, and that there is a clear gap between them in terms of execution time. The above experimental results point to several aspects that can be explored further, such as evaluating the recognition accuracy in an identical environment for the input images, addressing the impact of execution time on completing the recognition process, and the memory usage cost of the recognition techniques. Moreover, the results can be used for determining the best recognition algorithm, for ranking, or for decision levels. However, this paper focuses on evaluating the accuracy of the selected algorithms along with the cost in terms of execution time. From the analysis results described above, we found that the accuracy is very good for the Fisherfaces and Eigenfaces techniques, with an acceptable result (90%) for the LBPH recognizer. The most reasonable results produced through the experiments, with respect to the selected evaluation aspects (accuracy and execution time), are for the LBPH recognizer. Moreover, it is apparent that the close relation between Fisherfaces and Eigenfaces in both evaluation criteria stems from the implementation aspects they share.
TABLE II. COMPARISON OF THE LATEST ARTIFICIAL NEURAL NETWORK APPROACHES FOR FACE RECOGNITION.

Ref. [30]
Algorithm (definition): AIM extends an auto-encoder based Generative Adversarial Network (GAN) and consists of three components: a representation-learning sub-net, a face-synthesis sub-net, and robust facial representations.
Database (training & testing): CAFR: 1,446,500 images annotated with age, identity, gender, race, and landmarks. MORPH: 79,897 images from 21,084 subjects. CACD: 163,446 images from 2,000 celebrities. FG-NET: 1,002 face images from 82 non-celebrity subjects. IJB-C: 31,334 images and 11,779 videos from 3,531 subjects.
Performance: Evaluation results for AIM on the IJB-C dataset range from 0.826 to 96.2%. On the CACD-VS data set the result was 99.38; on the CAFR dataset, 99.76; on the FG-NET dataset, 93.20%; on the MORPH Album 2 dataset, 98.81. Overall, from 0.826 to 99.76.
Benefits: AIM is a novel approach that efficiently addresses how age variations affect face recognition. In addition to age gaps, AIM supports other complex unconstrained distractors, such as background complexity and lighting conditions (illumination variance). AIM does not require paired training data nor the true age of testing samples.
Limitations: The recognition rate might change if the algorithm is applied to a small-scale face image database.

Ref. [31]
Algorithm (definition): Neural Learning Approach (NLA): NLA uses the extracted facial features with a neural network to learn to detect and recognize faces.
Database (training & testing): The person's face is captured from the real-time environment.
Performance: From 85% to 90%; the average accuracy is 90%.
Benefits: Low cost, self-contained characteristics, and portability. In addition, it can be extended to detect animals, vehicles, and other abiotic and biotic objects in the environment.
Limitations: Focused only on static face recognition; it only recognizes obstructions at knee level; certain hindrances cannot be recognized effectively.

Ref. [33]
Algorithm (definition): Combines LBPH, multi-KNN, and a Back-Propagation Neural Network (BPNN), considering the correlation between the training images rather than the common use of image density as the key factor for enhancing face recognition. A key contribution is the generation of a new set, called the T-Dataset, from the original training data set, which is used to train the BPNN to converge faster and achieve better accuracy.
Database (training & testing): Yale: 15 different persons with 11 images per person, 165 face images in total. ORL: 40 different persons with 10 different pictures each, 400 face images in total. LFW: 13,322 images of human faces captured from the web using a Viola-Jones face detector, covering 5,749 individuals, of whom 1,680 have two or more images and the rest have only one image. Yale and ORL: 50% training and 50% testing; LFW View 1: 2,200 for training and 1,000 for testing.
Performance: 95.71% accuracy, which is comparable to state-of-the-art methods with a very small error value; overall it leads to higher accuracy.
Benefits: The advantage of using the multi-LBPH descriptor and KNN is that it generates a robust representation using the correlation between training images, which minimizes both the amount of data needed and the training time.
Limitations: The authors did not apply modern neural networks to show that higher accuracy could be achieved even with traditional feature extraction and dimension-reduction methods using a correlated training dataset.

Additionally, this paper reviews and compares the latest face recognition techniques adopted in research and industry that use artificial neural networks, critiques them, and organizes them into understandable categories. This study has found that there is no existing recognition method that the face recognition community has agreed on and that solves all the issues facing recognition. To increase the performance of face recognition systems, extensive evaluations should be performed to identify the best feature sets for face recognition; however, state-of-the-art research is not yet sufficient to discover this set. Hopefully, the comparisons in this paper will provide a step toward increasing the performance of feature extraction techniques for face recognition systems. Future work might consider different pose variations, illumination, and blurry and low-resolution images.

REFERENCES

[1] P. Viola and M. J. Jones, "Robust Real-Time Face Detection," International Journal of Computer Vision, vol. 57, no. 2, pp. 137-154, 2004.
[2] S. Zafeiriou, G. A. Atkinson, M. F. Hansen, W. A. P. Smith, V. Argyriou and M. Petrou, "Face Recognition and Verification Using Photometric Stereo: The Photoface Database and a Comprehensive Evaluation," IEEE Transactions on Information Forensics and Security, vol. 8, no. 1, pp. 121-135, 2013.
[3] M. Szczodrak and A. Czyżewski, "Evaluation of Face Detection Algorithms for the Bank Client Identity Verification," Foundations of Computing and Decision Sciences, vol. 42, no. 2, pp. 137-148, 2017.
[4] F. P. Mahdi, M. M. Habib, M. A. R. Ahad, S. Mckeever, A. Moslehuddin and P. Vasant, "Face recognition-based real-time system for surveillance," Intelligent Decision Technologies, vol. 11, no. 1, pp. 79-92, 2017.
[5] J. Jo, H. Kim and J. Kim, "3D facial shape reconstruction using macro- and micro-level features from high resolution facial images," Image and Vision Computing, vol. 64, pp. 1-9, August 2017.
[6] N. A. Abdullah, M. J. Saidi, N. H. A. Rahman, C. W. Chuah and I. R. A. Hamid, "Face recognition for criminal identification: An implementation of principal component analysis for face recognition," in The 2nd International Conference on Applied Science and Technology (ICAST'17), Kedah, Malaysia, 2017.
[7] K. C. Nwosu, "Mobile Facial Recognition System for Patient Identification in Medical Emergencies for Developing Economies," Journal for the Advancement of Developing Economies, vol. 5, no. 5, pp. 63-72, 2016.
[8] L. Masupha, T. Zuva, S. Ngwira and O. Esan, "Face Recognition Techniques, their Advantages, Disadvantages and Performance Evaluation," in International Conference on Computing, Communication and Security (ICCCS), Pamplemousses, Mauritius, 2015.
[9] N. B. Y. and H. K. Kalluri, "Comparative Study of Face Recognition Techniques," International Journal of Pure and Applied Mathematics, vol. 120, no. 6, pp. 3527-3536, 2018.
[10] R. Rouhi, M. Amiri and B. Irannejad, "A Review on Feature Extraction Techniques in Face Recognition," Signal & Image Processing: An International Journal (SIPIJ), vol. 3, no. 6, December 2012.
[11] O. N. A. AL-Allaf, "Review of Face Detection Systems Based Artificial Neural Networks Algorithms," The International Journal of Multimedia & Its Applications, vol. 6, February 2014.
[12] M. M. Kasar, D. Bhattacharyya and T.-H. Kim, "Face Recognition Using Neural Network: A Review," International Journal of Security and its Applications, vol. 10, pp. 81-100, March 2016.
[13] N. S. Prakash, P. Shetty, K. Kedilaya, Nithesh and S. B. N., "Comparative Study of PCA, KPCA, KFA and LDA Algorithms for Face Recognition," International Research Journal of Engineering and Technology (IRJET), vol. 5, no. 3, pp. 3460-3464, 2018.
[14] M. Anggo and L. Arapu, "Face Recognition Using Fisherface Method," Journal of Physics, vol. 1028, 2018.
[15] N. Stekas and D. van den Heuvel, "Face recognition using Local Binary Patterns Histograms (LBPH) on an FPGA-based System on Chip (SoC)," in IEEE International Parallel and Distributed Processing Symposium Workshops, Chicago, IL, USA, May 2016.
[16] S. Karve, V. Shende and R. Ahmed, "A comparative analysis of feature extraction techniques for face recognition," in International Conference on Communication, Information & Computing Technology (ICCICT), Mumbai, India, 2018.
[17] D. Chawla and M. C. Trivedi, "A Comparative Study on Face Detection Techniques for Security Surveillance," Advances in Computer and Computational Sciences, Advances in Intelligent Systems and Computing, vol. 554, pp. 531-541, 2017.
[18] N. S. Alamri and Y. M. Hausawi, "Evaluating Features Extraction Techniques: A Comparative Study on Face Recognition," International Journal of Innovative Research in Computer and Communication Engineering, vol. 4, pp. 20582-20588, December 2016.
[19] Z. Mahmood, T. Ali, S. Khattak and S. U. Khan, "A Comparative Study of Baseline Algorithms of Face Recognition," in 12th International Conference on Frontiers of Information Technology, Islamabad, Pakistan, 2014.
[20] S. Kukreja and R. Gupta, "Comparative Study of Different Face Recognition Techniques," in International Conference on Computational Intelligence and Communication Networks, Gwalior, India, 2011.
[21] P. Hurdale, S. Kamble and D. Chikhmurge, "Comparative Study of Face Recognition Techniques," International Journal of Engineering Research & Technology (IJERT), vol. 3, pp. 926-928, March 2014.
[22] S. Jaiswal, S. S. Bhadauria and R. S. Jadon, "Comparison Between Face Recognition Algorithms: Eigenfaces, Fisherfaces and Elastic Bunch Graph Matching," Journal of Global Research in Computer Science, vol. 7, pp. 187-193, July 2011.
[23] Y.-S. Liu, W.-S. Ng and C.-W. Liu, "A Comparison of Different Face Recognition Algorithms," 2009.
[24] M. U. Rahman, "A comparative study on face recognition techniques and neural network," 2013.
[25] F. Arshad, S. Marghoob and Z. Ahmed, "A Comparative Study on Face Recognition Techniques," International Journal of Advance Foundation and Research in Computer, vol. 1, pp. 90-102, July 2014.
[26] P. Belhumeur, J. Hespanha and D. Kriegman, "Eigenfaces vs. Fisherfaces: recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720, 1997.
[27] Y. Tayal, P. K. Pandey and D. B. V. Singh, "Face Recognition using Eigenface," International Journal of Emerging Technologies in Computational and Applied Sciences (IJETCAS), pp. 50-55, 2013.
[28] T. Kanade, J. F. Cohn and Y. Tian, "Comprehensive Database for Facial Expression Analysis," in The Fourth IEEE International Conference on Automatic Face and Gesture Recognition, Grenoble, France, 2000.
[29] J. Yang, P. Ren, D. Zhang, D. Chen, F. Wen, H. Li and G. Hua, "Neural Aggregation Network for Video Face Recognition," in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, Hawaii, July 2017.
[30] J. Zhao, Y. Cheng, Y. Cheng, Y. Yang, H. Lan, F. Zhao, L. Xiong, Y. Xu, J. Li, S. Pranata, S. Shen, J. Xing, H. Liu, S. Yan and J. Feng, "Look Across Elapse: Disentangled Representation Learning and Photorealistic Cross-Age Face Synthesis for Age-Invariant Face Recognition," arXiv:1809.00338v2, 2018.
[31] P. M. Kumar, U. Gandhi, R. Varatharajan, G. Manogaran, J. R. and T. Vadivel, "Intelligent face recognition and navigation system using neural learning for smart security in Internet of Things," Cluster Computing, pp. 1-12, 2017.
[32] A. S. and D. P. Agarwal, "Face Recognition Using Artificial Neural Networks," International Journal of Computer Science and Information Technologies, vol. 7, pp. 896-899, 2016.
[33] M. Abuzneid and A. Mahmood, "Enhanced Human Face Recognition Using LBPH Descriptor, Multi-KNN, and Back-Propagation Neural Network," IEEE Access, vol. 6, pp. 20641-20651, April 2018.
[34] M. Afifi, M. Nasser, M. Korashy, K. Rohde and A. Abdelrahim, "Can We Boost the Power of the Viola-Jones Face Detector Using Pre-processing? An Empirical Study," arXiv:1709.07720, December 2017.
