MOHAN Project
by
A.MOHANRAJ
(20BCA1014)
Under the Guidance of
Mr. N. SURESH, M.Sc., B.Ed., M.Phil.,
Accredited by NAAC with 'A++' Grade (Cycle II), Recognized u/s 2(f) & 12(B) of the UGC Act, 1956
APR/MAY 2023
MAHENDRA ARTS & SCIENCE COLLEGE (Autonomous)
Kalippatti
(Affiliated to Periyar University, Salem)
DECLARATION
I, A.MOHANRAJ, hereby declare that the project work entitled “REAL TIME EYE
DETECTION OPENCV IN PYTHON”, submitted to the Mahendra Arts & Science College
(Autonomous), Kalippatti in partial fulfillment of the requirements for the award of the degree of
Bachelor of Computer Applications, is a record of original project work done by me under
the supervision and guidance of Mr. R. SURESH, M.Sc., B.Ed., M.Phil., Assistant Professor,
Department of Computer Science & Applications, Mahendra Arts & Science College (Autonomous),
Kalippatti, and that it has not formed the basis for the award of any Degree / Diploma / Associateship /
Fellowship or other similar title to any candidate in any university.
Date: [A.MOHANRAJ]
ACKNOWLEDGEMENT
I am grateful to Smt. B. VALLIYAMMAL, M.A., B.Ed., Secretary,
Mahendra Educational Trust, for her generosity in providing excellent facilities.
I would like to convey my sincere gratitude and thanks to the Principal of Mahendra Arts &
Science College (Autonomous), Dr. S. ARJUNAN, M.Sc., M.Phil., Ph.D., for providing me
an extremely useful and enlightening opportunity to complete this work.
I would also like to extend my deepest gratitude to all those who have directly and indirectly
guided me in this project work.
Finally, I wish to thank my parents for their support and encouragement throughout my study.
CONTENTS
1. INTRODUCTION
2. SYSTEM SPECIFICATION
   2.1 HARDWARE SPECIFICATION
   2.2 SOFTWARE SPECIFICATION
3. SYSTEM STUDY AND ANALYSIS
   3.1 EXISTING SYSTEM
   3.2 PROPOSED SYSTEM
4. SOFTWARE DESCRIPTIONS
   4.1 FRONT END
   4.2 BACK END
5. PROJECT DESCRIPTION
   5.1 PROBLEM DEFINITION
   5.2 OVERVIEW OF THE PROJECT
   5.3 MODULES DESCRIPTION
6. SYSTEM DESIGN AND IMPLEMENTATION
   6.3 UML DIAGRAM
7. TESTING
8. CONCLUSION
9. FUTURE ENHANCEMENT
10. APPENDIX
   10.1 SOURCE CODE
   10.2 SCREENSHOTS
11. REFERENCES
ABSTRACT
An accurate and efficient eye detector is essential for many computer vision applications. This
project presents an efficient method to locate the eyes in facial images: first, a group of
candidate regions with regional extreme points is quickly proposed; then Haar cascades are
used to determine the most likely eye regions and to classify each region as the left or
right eye. This method is fast and adapts to variations in the image. Real Time Eye
Detection with OpenCV in Python was developed using the Python language and the OpenCV
library; eye tracking in OpenCV Python is object detection with Haar cascades, performing
recognition/detection with cascade files. Eye detection using OpenCV in Python detects a
human eye with the feature mappers known as Haar cascades. The project uses the Python
language along with the OpenCV library for algorithm execution and image processing
respectively; the Haar cascades used in this project are pretrained and shipped with the
OpenCV library, such as haarcascade_eye_default.xml.
1. INTRODUCTION
Eye detection has become an important research topic in computer vision and pattern
recognition, because the location of the human eyes is essential information for many
applications, including psychological analysis, facial expression recognition, driver
assistance, and medical diagnosis. However, eye detection is quite challenging in many
practical applications. Cameras are sensitive to light variations and shooting distance,
which can make the human eyes look very eccentric in facial images. Sometimes the face
is partially occluded, for example when half of the face is covered in a cover test for
detecting squint eyes; in this case some existing eye detection methods do not work, because
they rely on facial model detection to locate the eyes. An eye detector is also expected to
work well in various image modalities, that is, infrared as well as visible images. Moreover,
the eye detection algorithm should be fast, because it is supposed to run online in many
practical cases. Although many methods have been proposed to detect the eyes from
facial images, it is difficult to find one method that performs well in terms of
accuracy, robustness, and efficiency. Therefore, this project attempts to develop an efficient and
robust eye detection algorithm that fulfils the requirements of the application as much as
possible.
2. SYSTEM SPECIFICATIONS
HARDWARE SPECIFICATION
SOFTWARE SPECIFICATION
3. SYSTEM STUDY AND ANALYSIS
EXISTING SYSTEM
The existing system contains real-time right-eye detection with OpenCV in Python. Eye tracking
is accomplished by recognizing and connecting the same eye features across several image
frames to a single eye. To determine their applicability for the proposed applications, the
algorithms are tested for eye identification and tracking under a variety of
scenarios, including varied angles of the face, head motion speed, and eye occlusions.
Limitations
The real-time right-eye detection in OpenCV Python detects only a single eye.
With only the right eye detected, there is no expression recognition or auxiliary driving support.
It does not work for some users who wear contact lenses or false eyelashes.
It requires some calibration time before it gives satisfactory results; hence some users refrain from using it.
Eye movements of some users are unintentional.
PROPOSED SYSTEM
The proposed system performs real-time detection of both eyes. Eye detection is a crucial
step in many useful applications, ranging from face recognition/detection to human-computer
interfaces and driver behaviour analysis: by locating the position of the eyes, the gaze
can be determined. In this way it is possible to know where people are looking and to understand
their behaviour, in order to evaluate interest and attention levels. Generally, eye
detection consists of two steps: locating the face to extract the eye regions, and then
detecting the eyes within each eye window. The main objective of our work is to propose an eye
detection algorithm that is applicable in real time with a standard camera. In this project we
detect human eyes in real time using the Python OpenCV library.
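The two-step pipeline above (locate the face first, then search for eyes only inside the detected face region) can be sketched in plain Python. This is a minimal illustration with the two detector steps injected as functions, so the structure can be followed without a camera or cascade files; the helper name `detect_eyes_two_step` and the stub detectors are hypothetical, for illustration only.

```python
def detect_eyes_two_step(image, find_faces, find_eyes_in):
    """Two-step detection: face boxes first, then eyes inside each face box.

    find_faces(image)    -> list of (x, y, w, h) face boxes
    find_eyes_in(region) -> list of (x, y, w, h) eye boxes, relative to region
    Returns eye boxes in full-image coordinates.
    """
    eyes = []
    for (fx, fy, fw, fh) in find_faces(image):
        # Crop the face window; eyes are searched only inside it.
        face_region = [row[fx:fx + fw] for row in image[fy:fy + fh]]
        for (ex, ey, ew, eh) in find_eyes_in(face_region):
            # Translate eye coordinates back to full-image coordinates.
            eyes.append((fx + ex, fy + ey, ew, eh))
    return eyes

# Toy 10x10 "image" and stub detectors standing in for the Haar cascades.
image = [[0] * 10 for _ in range(10)]
faces = lambda img: [(2, 2, 6, 6)]                      # one face box
eyes_in = lambda region: [(1, 1, 2, 1), (4, 1, 2, 1)]   # two eyes in the face

print(detect_eyes_two_step(image, faces, eyes_in))
# → [(3, 3, 2, 1), (6, 3, 2, 1)]
```

With OpenCV, the two injected functions would be the detectMultiScale calls of a face cascade and an eye cascade.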
ADVANTAGES:
4. SOFTWARE DESCRIPTION
FRONT END
PYTHON
Python features
Python is a MUST for students and working professionals to become great Software
Engineers, especially when they are working in the Web Development domain. I will list down
some of the key advantages of learning Python:
Python is Interactive − You can actually sit at a Python prompt and interact with
the interpreter directly to write your programs.
Characteristics of Python
It provides very high-level dynamic data types and supports dynamic type
checking.
Applications of Python
As mentioned before, Python is one of the most widely used languages over the web. I'm
going to list a few of them here:
Easy-to-learn − Python has few keywords, simple structure, and a clearly defined
syntax. This allows the student to pick up the language quickly.
Easy-to-read − Python code is more clearly defined and visible to the eyes.
A broad standard library − Python's bulk of the library is very portable and cross-
platform compatible on UNIX, Windows, and Macintosh.
Interactive Mode − Python has support for an interactive mode which allows
interactive testing and debugging of snippets of code.
Portable − Python can run on a wide variety of hardware platforms and has the
same interface on all platforms.
Extendable − You can add low-level modules to the Python interpreter. These
modules enable programmers to add to or customize their tools to be more
efficient.
GUI Programming − Python supports GUI applications that can be created and
ported to many system calls, libraries and windows systems, such as Windows
MFC, Macintosh, and the X Window system of Unix.
Scalable − Python provides a better structure and support for large programs than
shell scripting.
OpenCV (Open Source Computer Vision Library) is an open source computer vision
and machine learning software library.
In the year 2006, its first major version, OpenCV 1.0 was released.
Computer Vision
Image processing deals with image-to-image transformation. The input and output of
image processing are both images.
Computer vision is the construction of explicit, meaningful descriptions of physical
objects from their image. The output of computer vision is a description or an
interpretation of structures in 3D scene.
Using OpenCV library
Detect specific objects such as faces, eyes, cars, in the videos or images.
Analyze the video, i.e., estimate the motion in it, subtract the background, and
track objects in it.
OpenCV was originally developed in C++. In addition to it, Python and Java bindings
were provided. OpenCV runs on various Operating Systems such as windows, Linux,
OSx, FreeBSD, Net BSD, Open BSD, etc.
This diagram explains the concepts of OpenCV with examples using python bindings.
OpenCV Library Modules
Core Functionality
This module covers the basic data structures such as Scalar, Point, Range, etc., that are
used to build OpenCV applications. In addition to these, it also includes the
multidimensional array Mat, which is used to store the images. In the Java library of
OpenCV, this module is included as a package with the name org.opencv.core.
Image Processing
This module covers various image processing operations such as image filtering,
geometrical image transformations, color space conversion, histograms, etc. In the Java
library of OpenCV, this module is included as a package with the
name org.opencv.imgproc.
Video
This module covers the video analysis concepts such as motion estimation, background
subtraction, and object tracking. In the Java library of OpenCV, this module is included
as a package with the name org.opencv.video.
Video I/O
This module explains the video capturing and video codecs using OpenCV library. In the
Java library of OpenCV, this module is included as a package with the
name org.opencv.videoio.
calib3d
This module includes algorithms for camera calibration and 3D reconstruction. In the Java
library of OpenCV, this module is included as a package with the name org.opencv.calib3d.
features2d
This module includes the concepts of feature detection and description. In the Java library
of OpenCV, this module is included as a package with the name org.opencv.features2d.
Objdetect
This module includes the detection of objects and instances of the predefined classes such
as faces, eyes, mugs, people, cars, etc. In the Java library of OpenCV, this module is
included as a package with the name org.opencv.objdetect.
Highgui
This module provides a simple user interface for displaying images and videos in windows. In
the Java library of OpenCV, this functionality is provided by the package org.opencv.highgui.
Haar cascades, first introduced by Viola and Jones in their seminal 2001
publication, Rapid Object Detection using a Boosted Cascade of Simple Features, are
arguably OpenCV's most popular object detection algorithm.
Sure, many algorithms are more accurate than Haar cascades (HOG + Linear SVM,
SSDs, Faster R-CNN, YOLO, to name a few), but they are still relevant and useful today.
One of the primary benefits of Haar cascades is that they are just so fast; it's hard to
beat their speed.
The downside to Haar cascades is that they tend to be prone to false-positive detections,
require parameter tuning when being applied for inference/detection, and, in general,
are not as accurate as the more "modern" algorithms we have today.
Haar cascades remain an important part of the computer vision and image processing literature.
In the remainder of this section, you'll learn about Haar cascades, including how to use
them with OpenCV.
Haar cascades
The Haar cascade classifier is an effective way to detect various objects in the
surroundings. This method is also used in the detection of faces and eyes. The Haar cascade
classifier relies on a collection of many positive images and negative images, which are
later used to train the classifier.
Description: The cascade object detector uses the Viola-Jones algorithm to detect people's
faces, noses, eyes, mouths, or upper bodies. You can also use the Image Labeler to train a
custom classifier to use with this System object.
First published by Paul Viola and Michael Jones in their 2001 paper, Rapid Object
Detection using a Boosted Cascade of Simple Features, this original work has become
one of the most cited papers in the computer vision literature.
In their paper, Viola and Jones propose an algorithm that is capable of detecting objects in
images, regardless of their location and scale in an image. Furthermore, this algorithm can
run in real time, making it possible to detect objects in video streams.
Specifically, Viola and Jones focus on detecting faces in images. Still, the framework can
be used to train detectors for arbitrary "objects", such as cars, buildings, kitchen utensils,
and even bananas.
While the Viola-Jones framework certainly opened the door to object detection, it is now
far surpassed by other methods, such as Histogram of Oriented Gradients (HOG) +
Linear SVM and deep learning. Still, we should respect this algorithm and at least have a
high-level understanding of what's going on underneath the hood.
Recall when we discussed images and convolutions, and how we slid a small matrix across
our image from left to right and top to bottom, computing an output value for each center
pixel of the kernel? Well, it turns out that this sliding window approach is also extremely
useful in the context of detecting objects in an image.
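The sliding-window scan described above can be sketched in a few lines; each generated position is a window that a detector would score. The function name `sliding_windows` is ours, for illustration.

```python
def sliding_windows(width, height, win_w, win_h, step):
    """Yield the top-left corner of every window position,
    scanning left to right, top to bottom."""
    for y in range(0, height - win_h + 1, step):
        for x in range(0, width - win_w + 1, step):
            yield (x, y)

# A 6x4 image scanned with a 2x2 window and a step of 2 pixels:
positions = list(sliding_windows(6, 4, 2, 2, 2))
print(positions)
# → [(0, 0), (2, 0), (4, 0), (0, 2), (2, 2), (4, 2)]
```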
Haarcascade classifier
Haar Cascade used for detection
Positive images – These images contain the object which we want our
classifier to identify.
Negative images – Images of everything else, which do not contain the
object we want to detect.
BACKEND
Tufts-Face-Database
Tufts Face Database is the most comprehensive, large-scale face dataset (over 10,000 images;
74 females and 38 males, from more than 15 countries, with an age range between 4 and 70
years old) that contains 7 image modalities: visible, near-infrared, thermal,
computerized sketch, LYTRO, recorded video, and 3D images. This webpage/dataset
contains the Tufts Face Database three-dimensional (3D) images. The other datasets are
made available to the user through separate links.
Image Acquisition
Each participant was seated in front of a blue background in close proximity to the
camera. The cameras were mounted on tripods and the height of each camera was
adjusted manually to correspond to the image center. The distance to the participant was
strictly controlled during the acquisition process. A constant lighting condition was
maintained using diffused lights.
TD_3D: The images were captured using a quad camera (an array of 4 cameras).
Each individual was asked to look at a fixed view-point while the cameras were moved to
9 equidistant positions forming an approximate semi-circle around the individual. The 3D
models were reconstructed using open-source structure-from-motion algorithms.
TD_IR_E (E stands for expression/emotion): The images were captured using a
FLIR Vue Pro camera. Each participant was asked to pose with (1) a neutral expression,
(2) a smile, (3) eyes closed, (4) an exaggerated shocked expression, (5) sunglasses.
TD_IR_A (A stands for around): The images were captured using a FLIR Vue Pro
camera. Each participant was asked to look at a fixed view-point while the cameras were
moved to 9 equidistant positions forming an approximate semi-circle around the
participant.
TD_RGB_E: The images were captured using a NIKON D3100 camera. Each
participant was asked to pose with (1) a neutral expression, (2) a smile, (3) eyes closed,
(4) exaggerated shocked expression, (5) sunglasses.
TD_RGB_A: The images were captured using a quad camera (an array of 4
visible field cameras). Each participant was asked to look at a fixed view-point while the
cameras were moved to 9 equidistant positions forming an approximate semi-circle
around the participant.
TD_NIR_A: The images were captured using a quad camera (an array of 4 night
vision cameras). The lighting condition for NIR imaging was maintained by using an
850nm Infrared 96 LED light system. Each participant was asked to look at a fixed view-
point while the cameras were moved to 9 equidistant positions forming an approximate
semi-circle around the participant.
TD_VIDEO: The images were captured using one of the visible field quad
cameras. Each participant was asked to look at a fixed view-point while the camera was
moved around the participant forming an approximate semi-circle.
5. PROJECT DESCRIPTION
PROBLEM DEFINITION
An accurate and efficient eye detector is essential for many computer vision applications.
This project presents an efficient method to locate the eyes in facial
images: first, a group of candidate regions with regional extreme points is quickly
proposed; then Haar cascades are used to determine the most likely eye regions
and to classify each region as the left or right eye. This method is fast and adapts to
variations in the image. Real Time Eye Detection with OpenCV in Python was developed
using the Python language and the OpenCV library; eye tracking in OpenCV Python is object
detection with Haar cascades, performing recognition/detection with cascade files.
Eye detection using OpenCV in Python detects a human eye with the feature
mappers known as Haar cascades. The project uses the Python language along with the OpenCV
library for algorithm execution and image processing respectively; the Haar cascades used in
this project are pretrained and shipped with the OpenCV library, such as haarcascade_eye_default.xml.
Eye detection has become an important research topic in computer vision and pattern
recognition, because the location of the human eyes is essential information for many
applications, including psychological analysis, facial expression recognition, driver
assistance, and medical diagnosis. However, eye detection is quite challenging in many
practical applications. Cameras are sensitive to light variations and shooting
distance, which can make the human eyes look very eccentric in facial images. Sometimes the face
is partially occluded, for example when half of the face is covered in a cover test for
detecting squint eyes; in this case some existing eye detection methods do not work, because
they rely on facial model detection to locate the eyes. An eye detector is also expected to
work well in various image modalities, that is, infrared as well as visible images. Moreover, the
eye detection algorithm should be fast, because it is supposed to run online in many
practical cases. Although many methods have been proposed to detect the eyes from
facial images, it is difficult to find one method that performs well in terms of
accuracy, robustness, and efficiency. Therefore, this project attempts to develop an efficient and
robust eye detection algorithm that fulfils the requirements of the application as much as
possible.
OVERVIEW OF THE PROJECT
This project detects a blink of the human eye with the feature mappers known as Haar cascades.
Here in the project, we will use the Python language along with the OpenCV library for
the algorithm execution and image processing respectively. The Haar cascades we are
going to use in the project are pretrained and stored along with the OpenCV library as the
haarcascade_frontalface_default.xml and haarcascade_eye_tree_eyeglasses.xml files. The
project develops a basic understanding of systems such as driver drowsiness
detection, eye blink locks, eye detection, face detection, and also the Haar cascades library.
About Haar Cascades:
Haar feature-based cascade classifiers are an effective object detection method proposed by
Paul Viola and Michael Jones in their paper, "Rapid Object Detection using a Boosted
Cascade of Simple Features", in 2001. It is a machine learning-based approach where a
cascade function is trained from a lot of positive and negative images. Here positive
images are the samples which contain the target object, and negative images are the ones which do not.
Now, we extract the features from the given input image with the haar features
shown in the above image. They are just like convolutional kernels. Each feature is a
single value obtained by subtracting the sum of pixels under the white rectangle from the
sum of pixels under the black rectangle.
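As a toy numeric example of this computation, consider a two-rectangle edge feature on a small patch (the pixel values below are made up for illustration):

```python
import numpy as np

# A 4x4 patch: bright on the left, dark on the right (a vertical edge).
patch = np.array([[9, 9, 1, 1],
                  [9, 9, 1, 1],
                  [9, 9, 1, 1],
                  [9, 9, 1, 1]])

# Two-rectangle Haar-like feature: white rectangle = left half,
# black rectangle = right half.
white = patch[:, :2].sum()   # 8 pixels of value 9 -> 72
black = patch[:, 2:].sum()   # 8 pixels of value 1 -> 8
feature = black - white      # large magnitude -> strong edge response
print(feature)
# → -64
```

A patch with no contrast between the two halves would give a feature value near zero, which is why such features respond to edges.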
The Excessive Calculation:
The features are calculated at all possible sizes and positions of the classifier window, and
the amount of computation this takes is enormous: even a 24×24 window results in over 160,000
features. Also, for each feature calculation, the sum of the pixels is needed. To make
this computationally less expensive, the creators of Haar cascades introduced the integral
image, which means that however large your image is, the sum of pixels in a given rectangle
can be computed from just four values.
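The integral image idea can be sketched with NumPy: after one cumulative-sum pass, the sum of pixels inside any rectangle needs only four lookups, regardless of the rectangle's size.

```python
import numpy as np

img = np.arange(16).reshape(4, 4)   # a toy 4x4 image

# Integral image, padded with a zero row/column for clean indexing:
# ii[y, x] = sum of img[0:y, 0:x]
ii = np.zeros((5, 5), dtype=int)
ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x, y, w, h):
    """Sum of img[y:y+h, x:x+w] from four integral-image lookups."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

print(rect_sum(ii, 1, 1, 2, 2))   # sum of img[1:3, 1:3] = 5 + 6 + 9 + 10
# → 30
```

Every Haar feature value then costs a handful of lookups instead of a per-pixel sum.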
The False Features:
Now, among the features that are calculated, most are false and irrelevant. A window
applied to a region of the image may see a different region which appears to the window
to have the same features, but is not the object in reality. So there is a need
to remove the false features, which was done with AdaBoost, which helped select the
best features out of the 160,000+ features. AdaBoost, short for Adaptive Boosting, is a
machine learning algorithm which was used for this sole task.
Algorithm
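As a hedged sketch of the boosting idea, the core AdaBoost loop (reweight the misclassified samples, then pick the next best weak classifier) can be illustrated with simple 1D threshold stumps. This is a toy version of the algorithm, not the exact feature-selection procedure used to train OpenCV's cascades.

```python
import numpy as np

def adaboost_stumps(x, y, rounds):
    """Minimal AdaBoost with 1D threshold stumps; labels y are in {-1, +1}."""
    n = len(x)
    w = np.full(n, 1.0 / n)              # sample weights, boosted each round
    ensemble = []                        # list of (alpha, threshold, polarity)
    for _ in range(rounds):
        best = None
        # Pick the stump (threshold, polarity) with the lowest weighted error.
        for thr in x:
            for pol in (1, -1):
                pred = np.where(x >= thr, pol, -pol)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, thr, pol, pred)
        err, thr, pol, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)     # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)     # this stump's vote weight
        ensemble.append((alpha, thr, pol))
        # Increase the weights of misclassified samples, then renormalize.
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
    return ensemble

def predict(ensemble, x):
    score = sum(a * np.where(x >= t, p, -p) for a, t, p in ensemble)
    return np.sign(score)

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([-1, -1, -1, 1, 1, 1])
model = adaboost_stumps(x, y, rounds=3)
print(predict(model, x))   # recovers the labels: split at x >= 4
```

In the cascade setting, each weak classifier is a Haar feature with a threshold, and AdaBoost is what selects the few thousand useful features out of the 160,000+ candidates.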
MODULES DESCRIPTION
1. Importing OpenCV
2. Importing XML file
3. Allowing WebCam to capture video
4. Capturing video in terms of frames
5. Converting the image to greyscale
6. Detecting Multi-scale faces
7. Mentioning sides of the rectangle for face detection
8. Displaying the detected Video
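Step 5 of the list above, converting a frame to greyscale, is just a weighted sum of the three colour channels. A minimal NumPy version using the standard ITU-R BT.601 luma weights (the same weighting OpenCV's cvtColor applies for BGR to grey) might look like this; the helper name `to_greyscale` is ours:

```python
import numpy as np

def to_greyscale(bgr):
    """Convert an H x W x 3 BGR frame to a greyscale H x W array
    using the ITU-R BT.601 luma weights."""
    b, g, r = bgr[..., 0], bgr[..., 1], bgr[..., 2]
    return 0.114 * b + 0.587 * g + 0.299 * r

frame = np.zeros((2, 2, 3))
frame[..., 2] = 100.0        # a pure-red frame (channels are B, G, R)
grey = to_greyscale(frame)
print(grey.shape)            # one intensity value per pixel
# → (2, 2)
# each pixel: 0.114*0 + 0.587*0 + 0.299*100 = 29.9
```

In the project itself this step is a single call, cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), which the cascades require because they operate on intensity, not colour.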
OpenCV can be installed and set up for Python from the command prompt by typing pip
install opencv-python. OpenCV provides a real-time optimized Computer
Vision library, tools, and hardware support, and the same will be used in our project.
import numpy as np
import cv2
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_eye_tree_eyeglasses.xml')
STEP 3 – Allowing WebCam to capture video
cv2.VideoCapture() allows the program to access the webcam when the function has the
parameter 0. In other cases, if rather than using the webcam one wishes to detect faces in
a video, then the parameter 0 in the function can simply be replaced with the video file
name.
first_read = True
cap = cv2.VideoCapture(0)
ret, img = cap.read()
while(ret):
    ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.bilateralFilter(gray, 5, 1, 1)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5, minSize=(200, 200))
STEP 7 – Mentioning sides of the rectangle for face detection
This function helps us to mention the dimensions, thickness, and color of the rectangle that
will be visible during face detection.
cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)   # (x, y, w, h) of a detected face
a = cv2.waitKey(1)
6. SYSTEM DESIGN AND IMPLEMENTATION
USE CASE DIAGRAM
A use case diagram is a way to summarize details of a system and the
users within that system. It is generally shown as a graphic depiction of
interactions among different elements in a system. Use case diagrams
specify the events in a system and how those events flow; however, a use case
diagram does not describe how those events are implemented.
UML DIAGRAM
7. TESTING
SYSTEM TESTING
INTEGRATION TESTING
Testing is done for each module. After testing all the modules, the
modules are integrated and testing of the final system is done with the test data,
specially designed to show that the system will operate successfully under all
conditions. Thus system testing is a confirmation that all is correct
and an opportunity to show the user that the system works.
VALIDATION TESTING
ACCEPTANCE TESTING
Acceptance testing can be defined in many ways, but a simple definition is that the test succeeds
when the software functions in a manner that can be reasonably expected by the customer. After
the acceptance test has been conducted, one of two possible conditions exists. This is to find
whether the inputs are accepted by the database or other validations: for example, accept only
numbers in a numeric field and date-format data in a date field, along with null checks for the
not-null fields. If any error occurs, error messages are shown. Either the function and performance
characteristics conform to specification and are accepted, or a deviation from specification is
uncovered and a deficiency list is created.
This testing is also called glass box testing. In this testing, by knowing the specified
functions that a product has been designed to perform, tests can be conducted that demonstrate each
function is fully operational while at the same time searching for errors in each function. It is a test case
design method that uses the control structure of the procedural design to derive test cases. Basis
path testing is a white box testing technique.
In this testing, by knowing the internal operation of a product, tests can be conducted to
ensure that "all gears mesh", that is, the internal operation performs according to specification
and all internal components have been adequately exercised. It fundamentally focuses on the
functional requirements of the software.
IMPLEMENTATION
8. CONCLUSION
The project has been appreciated by all the users in the organisation. It has shown that it is
possible for an unmodified web camera to be used for eye detection. With further research
to achieve the specified goal with precision, a usable tracking interface could be
implemented which requires no special hardware or setup cost, involving only a
simple software tool. Furthermore, eye detection could then run at all times.
9. FUTURE ENHANCEMENT
The Python OpenCV eye detection can be extended to detect and track eye images against complex
backgrounds. Distinctive features of the user's eye are used, and OpenCV uses machine learning
algorithms to search for faces within a picture.
10. APPENDIX
SOURCE CODE
import cv2

face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier("haarcascade_eye_tree_eyeglasses.xml")
cap = cv2.VideoCapture(0)

while 1:
    ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 255, 0), 2)
        roi_gray = gray[y:y + h, x:x + w]
        roi_color = img[y:y + h, x:x + w]
        # Detects eyes of different sizes in the input image
        eyes = eye_cascade.detectMultiScale(roi_gray)
        for (ex, ey, ew, eh) in eyes:
            cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 127, 255), 2)
    cv2.imshow('img', img)
    k = cv2.waitKey(5)
    if k == 27:
        break

cap.release()
cv2.destroyAllWindows()
<?xml version="1.0"?>
<!--
////////////////////////////////////////////////////////////////////////////////////////
By downloading, copying, installing or using the software you agree to this license.
Redistribution and use in source and binary forms, with or without modification,
*The name of Intel Corporation may not be used to endorse or promote products
This software is provided by the copyright holders and contributors "as is" and
any express or implied warranties, including, but not limited to, the implied
In no event shall the Intel Corporation or contributors be liable for any direct,
the use of this software, even if advised of the possibility of such damage.
-->
<opencv_storage>
<cascade type_id="opencv-cascade-classifier"><stageType>BOOST</stageType>
<featureType>HAAR</featureType>
<height>20</height>
<width>20</width>
<stageParams>
<maxWeakCount>93</maxWeakCount></stageParams>
<featureParams>
<maxCatCount>0</maxCatCount></featureParams>
<stageNum>24</stageNum>
<stages>
<_>
<maxWeakCount>6</maxWeakCount>
<stageThreshold>-1.4562760591506958e+00</stageThreshold>
<weakClassifiers>
<_>
<internalNodes>
0 -1 0 1.2963959574699402e-01</internalNodes>
<leafValues>
-7.7304208278656006e-01 6.8350148200988770e-01</leafValues></_>
<_>
<internalNodes>
0 -1 1 -4.6326808631420135e-02</internalNodes>
<leafValues>
5.7352751493453979e-01 -4.9097689986228943e-01</leafValues></_>
<_>
<internalNodes>
0 -1 2 -1.6173090785741806e-02</internalNodes>
<leafValues>
6.0254341363906860e-01 -3.1610709428787231e-01</leafValues></_>
<_>
<internalNodes>
0 -1 3 -4.5828841626644135e-02</internalNodes>
<leafValues>
6.4177548885345459e-01 -1.5545040369033813e-01</leafValues></_>
<_>
<internalNodes>
0 -1 4 -5.3759619593620300e-02</internalNodes>
<leafValues>
5.4219317436218262e-01 -2.0480829477310181e-01</leafValues></_>
<_>
<internalNodes>
0 -1 5 3.4171190112829208e-02</internalNodes>
<leafValues>
-2.3388190567493439e-01 01</leafValues></_></weakClassifiers></_>
<_>
<maxWeakCount>12</maxWeakCount>
<stageThreshold>-1.2550230026245117e+00</stageThreshold>
<weakClassifiers>
<_>
<internalNodes>
0 -1 6 -2.1727620065212250e-01</internalNodes>
<leafValues>
7.1098899841308594e-01 -5.9360730648040771e-01</leafValues></_>
<_>
<internalNodes>
0 -1 7 1.2071969918906689e-02</internalNodes>
<leafValues>
-2.8240481019020081e-01 5.9013551473617554e-01</leafValues></_>
<_>
<internalNodes>
0 -1 8 -1.7854139208793640e-02</internalNodes>
<leafValues>
5.3137522935867310e-01 -2.2758960723876953e-01</leafValues></_>
<_>
<internalNodes>
0 -1 9 2.2333610802888870e-02</internalNodes>
<leafValues>
-1.7556099593639374e-01 6.3356137275695801e-01</leafValues></_>
<_>
<internalNodes>
0 -1 10 -9.1420017182826996e-02</internalNodes>
<leafValues>
6.1563092470169067e-01 -1.6899530589580536e-01</leafValues></_>
<_>
<internalNodes>
0 -1 11 2.8973650187253952e-02</internalNodes>
<leafValues>
-1.2250079959630966e-01 7.4401170015335083e-01</leafValues></_>
<_>
<internalNodes>
0 -1 12 7.8203463926911354e-03</internalNodes>
<leafValues>
1.6974370181560516e-01 -6.5441650152206421e-01</leafValues></_>
<_>
<internalNodes>
0 -1 13 2.0340489223599434e-02</internalNodes>
<leafValues>
-1.2556649744510651e-01 8.2710450887680054e-01</leafValues></_>
<_>
<internalNodes>
0 -1 14 -1.1926149949431419e-02</internalNodes>
<leafValues>
3.8605681061744690e-01 -2.0992340147495270e-01</leafValues></_>
<_>
<internalNodes>
0 -1 15 -9.7281101625412703e-04</internalNodes>
<leafValues>
-6.3761192560195923e-01 1.2952390313148499e-01</leafValues></_>
<_>
<internalNodes>
0 -1 16 1.8322050891583785e-05</internalNodes>
<leafValues>
-3.4631478786468506e-01 2.2924269735813141e-01</leafValues></_>
<_>
<internalNodes>
0 -1 17 -8.0854417756199837e-03</internalNodes>
<leafValues>
-6.3665801286697388e-01 01</leafValues></_></weakClassifiers></_>
<_>
<maxWeakCount>9</maxWeakCount>
<stageThreshold>-1.3728189468383789e+00</stageThreshold>
<weakClassifiers>
<_>
<internalNodes>
0 -1 18 -1.1812269687652588e-01</internalNodes>
<leafValues>
6.7844521999359131e-01 -5.0045782327651978e-01</leafValues></_>
<_>
<internalNodes>
0 -1 19 -3.4332759678363800e-02</internalNodes>
<leafValues>
6.7186361551284790e-01 -3.5744878649711609e-01</leafValues></_>
<_>
<internalNodes>
0 -1 20 -2.1530799567699432e-02</internalNodes>
<leafValues>
7.2220700979232788e-01 -1.8192419409751892e-01</leafValues></_>
<_>
<internalNodes>
0 -1 21 -2.1909970790147781e-02</internalNodes>
<leafValues>
6.6529387235641479e-01 -2.7510228753089905e-01</leafValues></_>
<_>
<internalNodes>
0 -1 22 -2.8713539242744446e-02</internalNodes>
<leafValues>
6.9955700635910034e-01 -1.9615580141544342e-01</leafValues></_>
<_>
<internalNodes>
0 -1 23 -1.1467480100691319e-02</internalNodes>
<leafValues>
5.9267348051071167e-01 -2.2097350656986237e-01</leafValues></_>
<_>
<internalNodes>
0 -1 24 -2.2611169144511223e-02</internalNodes>
<leafValues>
3.4483069181442261e-01 -3.8379558920860291e-01</leafValues></_>
<_>
<internalNodes>
0 -1 25 -1.9308089977130294e-03</internalNodes>
<leafValues>
-7.9445719718933105e-01 1.5628659725189209e-01</leafValues></_>
<_>
<internalNodes>
0 -1 26 5.6419910833938047e-05</internalNodes>
<leafValues>
-3.0896010994911194e-01 01</leafValues></_></weakClassifiers></_>
<_>
<maxWeakCount>16</maxWeakCount>
<stageThreshold>-1.2879480123519897e+00</stageThreshold>
<weakClassifiers>
<_>
<internalNodes>
0 -1 27 1.9886520504951477e-01</internalNodes>
<leafValues>
-5.2860701084136963e-01 3.5536721348762512e-01</leafValues></_>
<_>
<internalNodes>
0 -1 28 -3.6008939146995544e-02</internalNodes>
<leafValues>
4.2109689116477966e-01 -3.9348980784416199e-01</leafValues></_>
<_>
<internalNodes>
0 -1 29 -7.7569849789142609e-02</internalNodes>
<leafValues>
4.7991541028022766e-01 -2.5122168660163879e-01</leafValues></_>
<_>
<internalNodes>
0 -1 30 8.2630853285081685e-05</internalNodes>
<leafValues>
-3.8475489616394043e-01 3.1849220395088196e-01</leafValues></_>
<_>
<internalNodes>
0 -1 31 3.2773229759186506e-04</internalNodes>
<leafValues>
-2.6427319645881653e-01 3.2547241449356079e-01</leafValues></_>
<_>
<internalNodes>
0 -1 32 -1.8574850633740425e-02</internalNodes>
<leafValues>
4.6736589074134827e-01 -1.5067270398139954e-01</leafValues></_>
<_>
<internalNodes>
0 -1 33 -7.0008762122597545e-05</internalNodes>
<leafValues>
2.9313150048255920e-01 -2.5365099310874939e-01</leafValues></_>
<_>
<internalNodes>
0 -1 34 -1.8552130088210106e-02</internalNodes>
<leafValues>
4.6273660659790039e-01 -1.3148050010204315e-01</leafValues></_>
<_>
<internalNodes>
0 -1 35 -1.3030420057475567e-02</internalNodes>
<leafValues>
4.1627219319343567e-01 -1.7751489579677582e-01</leafValues></_>
<_>
<internalNodes>
0 -1 36 6.5694141085259616e-05</internalNodes>
<leafValues>
-2.8035101294517517e-01 2.6680740714073181e-01</leafValues></_>
<_>
<internalNodes>
0 -1 37 1.7005260451696813e-04</internalNodes>
<leafValues>
-2.7027249336242676e-01 2.3981650173664093e-01</leafValues></_>
<_>
<internalNodes>
0 -1 38 -3.3129199873656034e-03</internalNodes>
<leafValues>
4.4411438703536987e-01 -1.4428889751434326e-01</leafValues></_>
<_>
<internalNodes>
0 -1 39 1.7583490116521716e-03</internalNodes>
<leafValues>
-1.6126190125942230e-01 4.2940768599510193e-01</leafValues></_>
<_>
<internalNodes>
0 -1 40 -2.5194749236106873e-02</internalNodes>
<leafValues>
4.0687298774719238e-01 -1.8202580511569977e-01</leafValues></_>
<_>
<internalNodes>
0 -1 41 1.4031709870323539e-03</internalNodes>
<leafValues>
8.4759786725044250e-02 -8.0018568038940430e-01</leafValues></_>
<_>
<internalNodes>
0 -1 42 -7.3991729877889156e-03</internalNodes>
<leafValues>
5.5766099691390991e-01 - 01</leafValues></_></weakClassifiers></_>
<_>
<maxWeakCount>23</maxWeakCount>
<stageThreshold>-1.2179850339889526e+00</stageThreshold>
<weakClassifiers>
<_>
<internalNodes>
0 -1 43 -2.9943080618977547e-02</internalNodes>
<leafValues>
3.5810810327529907e-01 -3.8487631082534790e-01</leafValues></_>
<_>
<internalNodes>
0 -1 44 -1.2567380070686340e-01</internalNodes>
<leafValues>
3.9316931366920471e-01 -3.0012258887290955e-01</leafValues></_>
<_>
<internalNodes>
0 -1 45 5.3635272197425365e-03</internalNodes>
<leafValues>
-4.3908619880676270e-01 1.9257010519504547e-01</leafValues></_>
<_>
<internalNodes>
0 -1 46 -8.0971820279955864e-03</internalNodes>
<leafValues>
3.9906668663024902e-01 -2.3407870531082153e-01</leafValues></_>
<_>
<internalNodes>
0 -1 47 -1.6597909852862358e-02</internalNodes>
<leafValues>
4.2095288634300232e-01 -2.2674840688705444e-01</leafValues></_>
<_>
<internalNodes>
0 -1 48 -2.0199299324303865e-03</internalNodes>
<leafValues>
-7.4156731367111206e-01 1.2601189315319061e-01</leafValues></_>
<_>
<internalNodes>
0 -1 49 -1.5202340437099338e-03</internalNodes>
<leafValues>
-7.6154601573944092e-01 8.6373612284660339e-02</leafValues></_>
<_>
<internalNodes>
0 -1 50 -4.9663940444588661e-03</internalNodes>
<leafValues>
4.2182239890098572e-01 -1.7904919385910034e-01</leafValues></_>
<_>
<internalNodes>
0 -1 51 -1.9207600504159927e-02</internalNodes>
<leafValues>
4.6894899010658264e-01 -1.4378750324249268e-01</leafValues></_>
<_>
<internalNodes>
0 -1 52 -1.2222680263221264e-02</internalNodes>
<leafValues>
3.2842078804969788e-01 -2.1802149713039398e-01</leafValues></_>
<_>
<internalNodes>
0 -1 53 5.7548668235540390e-02</internalNodes>
<leafValues>
-3.6768808960914612e-01 2.4357110261917114e-01</leafValues></_>
<_>
<internalNodes>
0 -1 54 -9.5794079825282097e-03</internalNodes>
<leafValues>
-7.2245067358016968e-01 6.3664563000202179e-02</leafValues></_>
<_>
<internalNodes>
0 -1 55 -2.9545740690082312e-03</internalNodes>
<leafValues>
3.5846439003944397e-01 -1.6696329414844513e-01</leafValues></_>
<_>
<internalNodes>
0 -1 56 -4.2017991654574871e-03</internalNodes>
<leafValues>
3.9094808697700500e-01 -1.2041790038347244e-01</leafValues></_>
<_>
<internalNodes>
0 -1 57 -1.3624990358948708e-02</internalNodes>
<leafValues>
-5.8767718076705933e-01 8.8404729962348938e-02</leafValues></_>
<_>
<internalNodes>
0 -1 58 6.2853112467564642e-05</internalNodes>
<leafValues>
-2.6348459720611572e-01 2.1419279277324677e-01</leafValues></_>
<_>
<internalNodes>
0 -1 59 -2.6782939676195383e-03</internalNodes>
<leafValues>
-7.8390169143676758e-01 8.0526962876319885e-02</leafValues></_>
<_>
<internalNodes>
0 -1 60 -7.0597179234027863e-02</internalNodes>
<leafValues>
4.1469261050224304e-01 -1.3989959657192230e-01</leafValues></_>
<_>
<internalNodes>
0 -1 61 9.2093646526336670e-02</internalNodes>
<leafValues>
-1.3055180013179779e-01 5.0435781478881836e-01</leafValues></_>
<_>
<internalNodes>
0 -1 62 -8.8004386052489281e-03</internalNodes>
<leafValues>
3.6609750986099243e-01 -1.4036649465560913e-01</leafValues></_>
<_>
<internalNodes>
0 -1 63 7.5080977694597095e-05</internalNodes>
<leafValues>
-2.9704439640045166e-01 2.0702940225601196e-01</leafValues></_>
<_>
<internalNodes>
0 -1 64 -2.9870450962334871e-03</internalNodes>
<leafValues>
3.5615700483322144e-01 -1.5445969998836517e-01</leafValues></_>
<_>
<internalNodes>
0 -1 65 -2.6441509835422039e-03</internalNodes>
<leafValues>
-5.4353517293930054e-01 01</leafValues></_></weakClassifiers></_>
<_>
<maxWeakCount>27</maxWeakCount>
<stageThreshold>-1.2905240058898926e+00</stageThreshold>
<weakClassifiers>
<_>
<internalNodes>
0 -1 66 -4.7862470149993896e-02</internalNodes>
<leafValues>
4.1528239846229553e-01 -3.4185820817947388e-01</leafValues></_>
<_>
<internalNodes>
0 -1 67 8.7350532412528992e-02</internalNodes>
<leafValues>
-3.8749781250953674e-01 2.4204200506210327e-01</leafValues></_>
<_>
<internalNodes>
0 -1 68 -1.6849499195814133e-02</internalNodes>
<leafValues>
5.3082478046417236e-01 -1.7282910645008087e-01</leafValues></_>
<_>
<internalNodes>
0 -1 69 -2.8870029374957085e-02</internalNodes>
<leafValues>
3.5843509435653687e-01 -2.2402590513229370e-01</leafValues></_>
<_>
<internalNodes>
0 -1 70 2.5679389946162701e-03</internalNodes>
<leafValues>
1.4990499615669250e-01 -6.5609407424926758e-01</leafValues></_>
<_>
<internalNodes>
0 -1 71 -2.4116659536957741e-02</internalNodes>
<leafValues>
5.5889678001403809e-01 -1.4810280501842499e-01</leafValues></_>
<_>
<internalNodes>
0 -1 218 -2.4550149682909250e-03</internalNodes>
<leafValues>
2.3330999910831451e-01 -1.3964480161666870e-01</leafValues></_>
<_>
<internalNodes>
0 -1 219 1.2721839593723416e-03</internalNodes>
<leafValues>
6.0480289161205292e-02 -4.9456089735031128e-01</leafValues></_>
<_>
<internalNodes>
0 -1 220 -4.8933159559965134e-03</internalNodes>
<leafValues>
-6.6833269596099854e-01 4.6218499541282654e-02</leafValues></_>
<_>
<internalNodes>
0 -1 221 2.6449989527463913e-02</internalNodes>
<leafValues>
-7.3235362768173218e-02 4.4425961375236511e-01</leafValues></_>
<_>
<internalNodes>
0 -1 222 -3.3706070389598608e-03</internalNodes>
<leafValues>
-4.2464339733123779e-01 6.8676561117172241e-02</leafValues></_>
<_>
<internalNodes>
0 -1 223 -2.9559480026364326e-03</internalNodes>
<leafValues>
1.6218039393424988e-01 -1.8222999572753906e-01</leafValues></_>
<_>
<internalNodes>
0 -1 224 3.0619909986853600e-02</internalNodes>
<leafValues>
-5.8643341064453125e-02 5.3263628482818604e-01</leafValues></_>
<_>
<_>
<internalNodes>
0 -1 350 -9.1709550470113754e-03</internalNodes>
<leafValues>
-7.5553297996520996e-01 5.2640449255704880e-02</leafValues></_>
<_>
<internalNodes>
0 -1 351 6.1552738770842552e-03</internalNodes>
<leafValues>
9.0939402580261230e-02 -4.4246131181716919e-01</leafValues></_>
<_>
<internalNodes>
0 -1 352 -1.0043520014733076e-03</internalNodes>
<le
<rects>
<_>
14 4 3 15 -1.</_>
<_>
15 4 1 15 3.</_></rects></_>
<_>
<rects>
<_>
19 13 1 2 -1.</_>
<_>
19 14 1 1 2.</_></rects></_>
<_>
<rects>
<_>
2 6 5 8 -1.</_>
<_>
2 10 5 4 2.</_></rects></_></features></cascade>
</opencv_storage>
SCREENSHOTS
Face detection
Haar cascades
Face database gathering
OpenCV eye detection
11. REFERENCES