MOHAN Project

The document presents a project titled 'Real Time Eye Detection OpenCV in Python' submitted by A. Mohanraj for the Bachelor of Computer Applications degree at Mahendra Arts & Science College. It details the project's objectives, methodologies, and the use of Python and OpenCV for eye detection, highlighting the challenges and advantages of the proposed system. The project aims to develop an efficient eye detection algorithm applicable in real-time scenarios, addressing limitations of existing systems.


REAL TIME EYE DETECTION OPENCV IN PYTHON

A project submitted in partial fulfillment of the requirements for the degree of


Bachelor of Computer Applications to the
Mahendra Arts & Science College (Autonomous)

by
A.MOHANRAJ
(20BCA1014)
Under the Guidance of
Mr.N.SURESH, M.Sc.,B.Ed., M.Phil.,

DEPARTMENT OF COMPUTER SCIENCE & APPLICATIONS


MAHENDRA ARTS & SCIENCE COLLEGE
(Autonomous)

Affiliated to Periyar University

Accredited by NAAC with 'A++' Grade (Cycle II); Recognized u/s 2(f) & 12(B) of the UGC Act, 1956

Kalippatti (Po), Namakkal (Dt) - 637501

APR/MAY– 2023
MAHENDRA ARTS & SCIENCE COLLEGE (Autonomous)
Kalippatti
(Affiliated to Periyar University, Salem)

This is to certify that the project entitled


REAL TIME EYE DETECTION OPENCV IN PYTHON
is the bonafide record of the project work done

by
A.MOHANRAJ
(20BCA1014)

A project submitted in partial fulfillment of the requirements for the degree of


Bachelor of Computer Applications to the
Mahendra Arts & Science College (Autonomous)

Mr. N. SURESH, M.Sc., B.Ed., M.Phil.,          Dr. R. INDHUMATHI, M.Sc., M.Phil., Ph.D.,


Assistant Professor                            Head of the Department
Dept. of Computer Science & Applications,      Dept. of Computer Science & Applications,
Mahendra Arts & Science College (Autonomous),  Mahendra Arts & Science College (Autonomous),
Kalippatti-637501.                             Kalippatti-637501.

Submitted for viva-voce examination held on

Internal Examiner External Examiner


DECLARATION

I, A. MOHANRAJ, hereby declare that the project work entitled "REAL TIME EYE
DETECTION OPENCV IN PYTHON", submitted to the Mahendra Arts & Science College
(Autonomous), Kalippatti in partial fulfillment of the requirements for the award of the degree of
Bachelor of Computer Applications, is a record of original project work done by me under
the supervision and guidance of Mr. N. SURESH, M.Sc., B.Ed., M.Phil., Assistant Professor,
Department of Computer Science & Applications, Mahendra Arts & Science College (Autonomous),
Kalippatti, and it has not formed the basis for the award of any Degree / Diploma / Associateship /
Fellowship or other similar title to any candidate in any university.

Place: Kalippatti Signature of the Candidate

Date: [A.MOHANRAJ]
ACKNOWLEDGEMENT

I would like to express my deepest gratitude to Shri. M. G. BHARATHKUMAR, M.A., B.Ed.,
Chairman of Mahendra Educational Trust, for offering me an opportunity and providing me all the
facilities to do my project work.

I am grateful to Smt. B. VALLIYAMMAL, M.A., B.Ed., Secretary, Mahendra Educational
Trust, for her generosity in providing excellent facilities.

I extend my sincere thanks to the Managing Directors of Mahendra Educational Trust,
Mr. Ba. MAHENDHIRAN and Mr. B. MAHA AJAY PRASATH.

I would like to convey my sincere gratitude and thanks to the Principal of Mahendra Arts &
Science College (Autonomous), Dr. S. ARJUNAN, M.Sc., M.Phil., Ph.D., for providing me an
extremely useful and enlightening opportunity to complete this work.

I am ineffably indebted to Dr. J. JOSEPHINE DAISY, M.Com., M.Phil., MBA., Ph.D., the
Controller of Examinations of Mahendra Arts & Science College (Autonomous), for her
conscientious guidance and encouragement to accomplish this project work.

I express my profound thanks to Dr. R. INDHUMATHI, M.Sc., M.Phil., Ph.D., Head,
Department of Computer Science & Applications, for her advice and assistance in keeping my
progress on schedule.

I would like to express my deep gratitude to Mr. N. SURESH, M.Sc., B.Ed., M.Phil., Assistant
Professor, Department of Computer Science & Applications, my project guide, for his patient
guidance, enthusiastic encouragement, and useful critiques of this work.

I would also like to extend my deepest gratitude to all those who have directly and indirectly
guided me in this work.

Finally, I wish to thank my parents for their support and encouragement throughout my study.
CONTENTS

S.No.  PARTICULARS                             PAGE No.

       ABSTRACT                                vi

1.     INTRODUCTION                            1

2.     SYSTEM SPECIFICATION                    2
       2.1 HARDWARE SPECIFICATION              2
       2.2 SOFTWARE SPECIFICATION              2

3.     SYSTEM STUDY AND ANALYSIS               3
       3.1 EXISTING SYSTEM                     3
       3.2 PROPOSED SYSTEM                     4

4.     SOFTWARE DESCRIPTIONS                   5
       4.1 FRONT END                           5
       4.2 BACK END                            14

5.     PROJECT DESCRIPTION                     17
       5.1 PROBLEM DEFINITION                  17
       5.2 PROJECT OVERVIEW                    18
       5.3 MODULES DESCRIPTION                 20

6.     SYSTEM DESIGN AND IMPLEMENTATION        23
       6.1 DATA FLOW DIAGRAM                   23
       6.2 USE CASE DIAGRAM                    24
       6.3 UML DIAGRAM                         25

7.     TESTING                                 26

8.     CONCLUSION                              29

9.     FUTURE ENHANCEMENT                      30

10.    APPENDIX                                31
       10.1 SOURCE CODE                        31
       10.2 SCREENSHOTS                        52

11.    REFERENCES                              56
ABSTRACT

An accurate and efficient eye detector is essential for many computer vision applications. This
project presents an efficient method to locate the eyes in facial images: first, a group of
candidate regions with regional extreme points is quickly proposed; then a Haar cascade
classifier is adopted to determine the most likely eye region and classify it as the left or
right eye. This method is fast and adaptable to variations in the image. The Real Time Eye
Detection OpenCV in Python system was developed using Python and OpenCV; eye tracking in
OpenCV Python is object detection with Haar cascades, using cascade files to perform
recognition/detection. In eye detection using OpenCV and Python, a human eye is detected with
the feature mappers known as Haar cascades. The project uses the Python language along with the
OpenCV library for algorithm execution and image processing respectively; the Haar cascades
used in this project are pretrained and ship with the OpenCV library, e.g. haarcascade_eye.xml.
1. INTRODUCTION

Eye detection has become an important research topic in computer vision and pattern
recognition, because the location of the human eyes is essential information for many
applications, including psychological analysis, facial expression recognition, driver
assistance, and medical diagnosis. However, eye detection is quite challenging in many
practical applications. Cameras are sensitive to light variations and to shooting
distance, which can make the human eyes appear very eccentric in facial images. Sometimes
the face is partially occluded; for example, half the face is covered in a cover test for
detecting squint eyes. In this case some existing eye detection methods do not work,
because they rely on facial model detection to locate the eyes. An eye detector is also
expected to work well across image modalities, that is, in both infrared and visible
images. Moreover, the eye detection algorithm should be fast, because it is supposed to
run online in many practical cases. Although many methods have been proposed to detect
the eyes from facial images, it is difficult to find one method that performs well in
terms of accuracy, robustness, and efficiency. Therefore, this project attempts to develop
an efficient and robust eye detection algorithm that fulfils the requirements of the
application as much as possible.

1
2. SYSTEM SPECIFICATIONS

HARDWARE SPECIFICATION

 PROCESSOR : INTEL CORE i5
 HARD DISK : 256 GB
 MONITOR : DELL ULTRASHARP UP3221Q
 MOUSE : LOGITECH
 RAM : 8 GB

SOFTWARE SPECIFICATION

 FRONT END : PYTHON USING OPENCV
 BACK END : TUFTS FACE DATABASE
 OPERATING SYSTEM : WINDOWS 11
 SOFTWARE USED : VISUAL STUDIO CODE

2
3. SYSTEM STUDY AND ANALYSIS

EXISTING SYSTEM

The existing system performs real-time right-eye detection with OpenCV and Python. Eye
tracking is accomplished by recognizing and connecting the same eye features across several
image frames to a single eye. To determine their applicability for the proposed applications,
the algorithms are tested for eye identification and tracking under a variety of
scenarios, including varied angles of the face, head motion speed, and eye occlusions.

Limitations

 The real-time right-eye detection tracks only a single eye
 With only one right eye detected, there is no expression recognition or auxiliary-driving support
 It does not work for some users who wear contact lenses or have long eyelashes
 It requires some calibration time before it gives satisfactory results; hence some users
refrain from using it
 Eye movements of some users are unintentional

3
PROPOSED SYSTEM

The proposed system performs real-time detection of both eyes. Eye detection is a crucial
step in many useful applications, ranging from face recognition/detection to human-computer
interaction and driver behaviour analysis; by locating the position of the eyes, the gaze
can be determined. In this way it is possible to know where people are looking and to
understand their behaviour, in order to evaluate interest and attention levels. Generally,
detecting the eyes consists of two steps: locating the face to extract the eye regions, and
then detecting the eyes within each eye window. The main objective of our work is to propose
an eye detection algorithm that is applicable in real time with a standard camera. In this
project we detect human eyes in real time using Python and the OpenCV library.

ADVANTAGES:

 It helps to learn from experts delivering the skills
 It makes technology more intuitive
 It helps to communicate with machines in order to automate manual tasks
 It improves user experience and the measurement of human eye reactions
 It gives insight into the cognitive processes underlying a wide variety of human
behaviour and can reveal things such as learning patterns and social interaction
methods

4
4. SOFTWARE DESCRIPTION

FRONT END

PYTHON

Python is a general-purpose, interpreted, interactive, object-oriented, high-level

programming language. It was created by Guido van Rossum during 1985-1990.

Python features

Python is a high-level, interpreted, interactive and object-oriented scripting language.

Python is designed to be highly readable. It uses English keywords frequently, whereas
other languages use punctuation, and it has fewer syntactical constructions than other
languages.

Python is a must for students and working professionals who want to become great software
engineers, especially when working in the web development domain. I will list down
some of the key advantages of learning Python:

 Python is Interpreted − Python is processed at runtime by the interpreter. You


do not need to compile your program before executing it. This is similar to PERL
and PHP.

 Python is Interactive − You can actually sit at a Python prompt and interact with
the interpreter directly to write your programs.

 Python is Object-Oriented − Python supports Object-Oriented style or technique


of programming that encapsulates code within objects

 Python is a Beginner's Language − Python is a great language for beginner-level
programmers and supports the development of a wide range of applications,
from simple text processing to WWW browsers to games.

5
Characteristics of Python

Following are important characteristics of Python Programming −

 It supports functional and structured programming methods as well as OOP.

 It can be used as a scripting language or can be compiled to byte-code for building


large applications.

 It provides very high-level dynamic data types and supports dynamic type
checking.

 It supports automatic garbage collection.

 It can be easily integrated with C, C++, COM, ActiveX

Applications of Python

As mentioned before, Python is one of the most widely used languages on the web. I'm
going to list a few of them here:

 Easy-to-learn − Python has few keywords, simple structure, and a clearly defined
syntax. This allows the student to pick up the language quickly.

6
 Easy-to-read − Python code is more clearly defined and visible to the eyes.

 Easy-to-maintain − Python's source code is fairly easy-to-maintain.

 A broad standard library − Python's bulk of the library is very portable and cross-
platform compatible on UNIX, Windows, and Macintosh.

 Interactive Mode − Python has support for an interactive mode which allows
interactive testing and debugging of snippets of code.

 Portable − Python can run on a wide variety of hardware platforms and has the
same interface on all platforms.

 Extendable − You can add low-level modules to the Python interpreter. These
modules enable programmers to add to or customize their tools to be more
efficient.

 Databases − Python provides interfaces to all major commercial databases.

 GUI Programming − Python supports GUI applications that can be created and
ported to many system calls, libraries and windows systems, such as Windows
MFC, Macintosh, and the X Window system of Unix.

 Scalable − Python provides a better structure and support for large programs than
shell scripting.

Python using opencv

OpenCV (Open Source Computer Vision Library) is an open-source computer vision
and machine learning software library.

A Brief History of OpenCV

OpenCV was initially an Intel research initiative to advance CPU-intensive applications.

 In the year 2006, its first major version, OpenCV 1.0 was released.

 In October 2009, the second major version, OpenCV 2 was released.

 In August 2012, OpenCV was taken over by a nonprofit organization, OpenCV.org.


7
 OpenCV is a cross-platform library using which we can develop real-
time computer vision applications. It mainly focuses on image processing, video
capture and analysis including features like face detection and object detection.

Let's start the chapter by defining the term "Computer Vision".

Computer Vision

Computer Vision can be defined as a discipline that explains how to reconstruct,
interpret, and understand a 3D scene from its 2D images, in terms of the properties of the
structures present in the scene. It deals with modeling and replicating human vision using
computer software and hardware.
Computer Vision overlaps significantly with the following fields −
 Image Processing − It focuses on image manipulation.
 Pattern Recognition − It explains various techniques to classify patterns.

 Photogrammetry − It is concerned with obtaining accurate measurements from


images.

Computer Vision Vs Image Processing

Image processing deals with image-to-image transformation. The input and output of
image processing are both images.
Computer vision is the construction of explicit, meaningful descriptions of physical
objects from their image. The output of computer vision is a description or an
interpretation of structures in 3D scene.

8
Using OpenCV library

 Read and write images

 Capture and save videos

 Process images (filter, transform)

 Perform feature detection

 Detect specific objects such as faces, eyes, cars, in the videos or images.

 Analyze the video, i.e., estimate the motion in it, subtract the background, and
track objects in it.

OpenCV was originally developed in C++. In addition, Python and Java bindings
were provided. OpenCV runs on various operating systems such as Windows, Linux,
OS X, FreeBSD, NetBSD, OpenBSD, etc.

This diagram explains the concepts of OpenCV with examples using the Python bindings.

9
OpenCV Library Modules

Following are the main library modules of the OpenCV library.

Core Functionality

This module covers the basic data structures such as Scalar, Point, Range, etc., that are
used to build OpenCV applications. In addition to these, it also includes the
multidimensional array Mat, which is used to store the images. In the Java library of
OpenCV, this module is included as a package with the name org.opencv.core.

Image Processing

This module covers various image processing operations such as image filtering,
geometrical image transformations, color space conversion, histograms, etc. In the Java
library of OpenCV, this module is included as a package with the
name org.opencv.imgproc.

Video

This module covers the video analysis concepts such as motion estimation, background
subtraction, and object tracking. In the Java library of OpenCV, this module is included
as a package with the name org.opencv.video.

Video I/O

This module explains the video capturing and video codecs using OpenCV library. In the
Java library of OpenCV, this module is included as a package with the
name org.opencv.videoio.

calib3d

This module includes algorithms regarding basic multiple-view geometry algorithms,


single and stereo camera calibration, object pose estimation, stereo correspondence and
elements of 3D reconstruction. In the Java library of OpenCV, this module is included as
a package with the name org.opencv.calib3d.

10
features2d

This module includes the concepts of feature detection and description. In the Java library
of OpenCV, this module is included as a package with the name org.opencv.features2d.

Objdetect

This module includes the detection of objects and instances of the predefined classes such
as faces, eyes, mugs, people, cars, etc. In the Java library of OpenCV, this module is
included as a package with the name org.opencv.objdetect.

Highgui

This is an easy-to-use interface with simple UI capabilities. In the Java library of


OpenCV, the features of this module is included in two different packages
namely, org.opencv.imgcodecs and org.opencv.videoio.

OpenCV Haar Cascades

Haar cascades, first introduced by Viola and Jones in their seminal 2001
publication, Rapid Object Detection using a Boosted Cascade of Simple Features, are
arguably OpenCV's most popular object detection algorithm.

Sure, many algorithms are more accurate than Haar cascades (HOG + Linear SVM,
SSDs, Faster R-CNN, YOLO, to name a few), but they are still relevant and useful today.

One of the primary benefits of Haar cascades is that they are just so fast; it's hard to
beat their speed.

The downside to Haar cascades is that they tend to be prone to false-positive detections,
require parameter tuning when being applied for inference/detection, and, in general,
are not as accurate as the more "modern" algorithms we have today.

That said, Haar cascades are:
11
1. An important part of the computer vision and image processing literature

2. Still used with OpenCV

3. Still useful, particularly when working on resource-constrained devices where we

cannot afford to use more computationally expensive object detectors

In the remainder of this tutorial, you'll learn about Haar cascades, including how to use
them with OpenCV.

Haar cascades

The Haar cascade classifier is an effective way to detect various objects in the
surroundings. This method is also used in the detection of faces and eyes. The classifier
is trained on a collection of a large number of positive images and negative images.

The cascade object detector uses the Viola-Jones algorithm to detect people's
faces, noses, eyes, mouths, or upper bodies. An image labeler can also be used to train a
custom classifier for this detector.

12
First published by Paul Viola and Michael Jones in their 2001 paper, Rapid Object
Detection using a Boosted Cascade of Simple Features, this original work has become
one of the most cited papers in computer vision literature.

In their paper, Viola and Jones propose an algorithm that is capable of detecting objects in
images, regardless of their location and scale in an image. Furthermore, this algorithm can
run in real-time, making it possible to detect objects in video streams.

Specifically, Viola and Jones focus on detecting faces in images. Still, the framework can
be used to train detectors for arbitrary "objects," such as cars, buildings, kitchen utensils,
and even bananas.

While the Viola-Jones framework certainly opened the door to object detection, it is now
far surpassed by other methods, such as Histogram of Oriented Gradients (HOG) +
Linear SVM and deep learning. Still, we should respect this algorithm and at least have a
high-level understanding of what's going on underneath the hood.

Recall when we discussed images and convolutions and how we slid a small matrix across
our image from left to right and top to bottom, computing an output value for each center
pixel of the kernel? Well, it turns out that this sliding window approach is also extremely
useful in the context of detecting objects in an image.
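The sliding-window idea recalled above can be made concrete with a small NumPy sketch: a k×k window slides across the image and produces one output value per position (here a plain pixel sum, standing in for a Haar feature response). This is illustrative code, not OpenCV's detector.

```python
import numpy as np

def slide_sum(image, k=3):
    """Slide a k x k window over image, summing the pixels at each position."""
    h, w = image.shape
    out = np.zeros((h - k + 1, w - k + 1), dtype=image.dtype)
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = image[y:y + k, x:x + k].sum()  # one value per window
    return out

img = np.ones((5, 5), dtype=np.int64)
print(slide_sum(img))  # every 3x3 window of ones sums to 9
```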

Haarcascade classifier

13
Haar Cascade used for detection

Haar Cascade is a machine learning-based approach where a lot of positive and


negative images are used to train the classifier.

 Positive images – These images contain the images which we want our
classifier to identify.
 Negative Images – Images of everything else, which do not contain the
object we want to detect.

BACKEND

Tufts-Face-Database

Tufts Face Database is the most comprehensive, large-scale face dataset (over 10,000 images;
74 females and 38 males, from more than 15 countries, with an age range between 4 and 70
years old) that contains 7 image modalities: visible, near-infrared, thermal,
computerized sketch, LYTRO, recorded video, and 3D images. This webpage/dataset
contains the Tufts Face Database three-dimensional (3D) images. The other modalities are
made available through separate links.

Cross-modality face recognition is an emerging topic due to the widespread
usage of different sensors in day-to-day life applications. The development of face
recognition systems relies greatly on existing databases for evaluation and for obtaining
training examples for data-hungry machine learning algorithms. However, currently there
is no publicly available face database that includes more than two modalities for the same
subject. In this work, we introduce the Tufts Face Database, which includes images acquired
in various modalities: photograph images, thermal images, near-infrared images, a
recorded video, a computerized facial sketch, and 3D images of each volunteer's face. An
Institutional Research Board protocol was obtained, and images were collected from
students, staff, faculty, and their family members at Tufts University.

This database will be available to researchers worldwide in order to benchmark
facial recognition algorithms for sketch, thermal, NIR, 3D and
heterogeneous face recognition.
14
Links to modalities of the Tufts Face Database

1. Tufts Face Database Computerized Sketches (TD_CS)

2. Tufts Face Database Thermal (TD_IR) Around+Emotion

3. Tufts Face Database Thermal Cropped (TD_IR_Cropped) Emotion only

4. Tufts Face Database Three Dimensional (3D) (TD_3D)

5. Tufts Face Database Lytro (TD_LYT) (Check Note)

6. Tufts Face Database 2D RGB Around (TD_RGB_A) (Check Note)

7. Tufts Face Database 2D RGB Emotion (TD_RGB_E) (Check Note)

8. Tufts Face Database Night Vision (NIR) (TD_NIR) (Check Note)

9. Tufts Face Database Video (TD_VIDEO) (Check Note)

10. Tufts Face Thermal2RGB Dataset

Image Acquisition

Each participant was seated in front of a blue background in close proximity to the
camera. The cameras were mounted on tripods and the height of each camera was
adjusted manually to correspond to the image center. The distance to the participant was
strictly controlled during the acquisition process. A constant lighting condition was
maintained using diffused lights.

TD_CS: Computerized facial sketches were generated using the software FACES 4.0
[1], one of the software packages most widely used by law enforcement agencies, the
FBI, and the US Military. The software allows researchers to choose a set of candidate
facial components from the database based on their observation or memory.

TD_3D: The images were captured using a quad camera (an array of 4 cameras).
Each individual was asked to look at a fixed view-point while the cameras were moved to
9 equidistant positions forming an approximate semi-circle around the individual. The 3D
models were reconstructed using open-source structure-from-motion algorithms.

15
TD_IR_E(E stands for expression/emotion): The images were captured using a
FLIR Vue Pro camera. Each participant was asked to pose with (1) a neutral expression,
(2) a smile, (3) eyes closed, (4) exaggerated shocked expression, (5) sunglasses.

TD_IR_A (A stands for around): The images were captured using a FLIR Vue Pro
camera. Each participant was asked to look at a fixed view-point while the cameras were
moved to 9 equidistant positions forming an approximate semi-circle around the
participant.

TD_RGB_E: The images were captured using a NIKON D3100 camera. Each
participant was asked to pose with (1) a neutral expression, (2) a smile, (3) eyes closed,
(4) exaggerated shocked expression, (5) sunglasses.

TD_RGB_A: The images were captured using a quad camera (an array of 4
visible field cameras). Each participant was asked to look at a fixed view-point while the
cameras were moved to 9 equidistant positions forming an approximate semi-circle
around the participant.

TD_NIR_A: The images were captured using a quad camera (an array of 4 night
vision cameras). The lighting condition for NIR imaging was maintained by using an
850nm Infrared 96 LED light system. Each participant was asked to look at a fixed view-
point while the cameras were moved to 9 equidistant positions forming an approximate
semi-circle around the participant.

TD_LYT_A: The images were captured using a LYTRO ILLUM 40 Megaray


Light Field camera. Each participant was asked to look at a fixed view-point while the
cameras were moved to 9 equidistant positions forming an approximate semi-circle
around the participant.

TD_VIDEO: The images were captured using one of the visible field quad
cameras. Each participant was asked to look at a fixed view-point while the camera was
moved around the participant forming an approximate semi-circle.

16
5.PROJECT DESCRIPTION

PROBLEM DEFINITION

An accurate and efficient eye detector is essential for many computer vision applications.
This project presents an efficient method to locate the eyes in facial images: first, a
group of candidate regions with regional extreme points is quickly proposed; then a Haar
cascade classifier is adopted to determine the most likely eye region and classify it as
the left or right eye. This method is fast and adaptable to variations in the image. The
Real Time Eye Detection OpenCV in Python system was developed using Python and OpenCV;
eye tracking in OpenCV Python is object detection with Haar cascades, using cascade
files to perform recognition/detection.

In eye detection using OpenCV and Python, a human eye is detected with the feature
mappers known as Haar cascades. The project uses the Python language along with the
OpenCV library for algorithm execution and image processing respectively; the Haar
cascades used in this project are pretrained and ship with the OpenCV library, e.g.
haarcascade_eye.xml.

Eye detection has become an important research topic in computer vision and pattern
recognition, because the location of the human eyes is essential information for many
applications, including psychological analysis, facial expression recognition, driver
assistance, and medical diagnosis. However, eye detection is quite challenging in many
practical applications. Cameras are sensitive to light variations and to shooting
distance, which can make the human eyes appear very eccentric in facial images. Sometimes
the face is partially occluded; for example, half the face is covered in a cover test for
detecting squint eyes. In this case some existing eye detection methods do not work,
because they rely on facial model detection to locate the eyes. An eye detector is also
expected to work well across image modalities, that is, in both infrared and visible
images. Moreover, the eye detection algorithm should be fast, because it is supposed to
run online in many practical cases. Although many methods have been proposed to detect
the eyes from facial images, it is difficult to find one method that performs well in
terms of accuracy, robustness, and efficiency. Therefore, this project attempts to develop
an efficient and robust eye detection algorithm that fulfils the requirements of the
application as much as possible.
17
OVERVIEW OF THE PROJECT

The project detects a blink of the human eye with the feature mappers known as Haar cascades.
Here in the project, we use the Python language along with the OpenCV library for
the algorithm execution and image processing respectively. The Haar cascades we are
going to use in the project are pretrained and stored along with the OpenCV library as the
haarcascade_frontalface_default.xml and haarcascade_eye_tree_eyeglasses.xml files. The
project develops a basic understanding of systems such as driver drowsiness
detection, eye blink locks, eye detection, face detection, and also the Haar cascades library.
About Haar Cascades:
Haar feature-based cascade classifiers are an effective object detection method proposed by
Paul Viola and Michael Jones in their paper, "Rapid Object Detection using a Boosted
Cascade of Simple Features", in 2001. It is a machine learning-based approach where a
cascade function is trained from a lot of positive and negative images. Here positive
images are the samples which contain the target object, and negative images are the ones
which do not.

Now, we extract the features from the given input image with the Haar features
shown in the above image. They are just like convolutional kernels. Each feature is a
single value obtained by subtracting the sum of pixels under the white rectangle from the
sum of pixels under the black rectangle.
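Following the description above, the value of a simple two-rectangle Haar feature can be computed directly: the sum under the white rectangle is subtracted from the sum under the black rectangle. This is an illustrative NumPy sketch, not OpenCV's internal code; the rectangle layout is an assumption for the example.

```python
import numpy as np

def two_rect_feature(patch):
    """Edge-like Haar feature: sum of the top ("black") rectangle
    minus the sum of the bottom ("white") rectangle."""
    h = patch.shape[0] // 2
    black = patch[:h, :].sum()   # top rectangle
    white = patch[h:, :].sum()   # bottom rectangle
    return black - white

# A patch that is dark on top (0) and bright below (255), like an eyebrow edge
patch = np.vstack([np.zeros((2, 4)), np.full((2, 4), 255.0)])
print(two_rect_feature(patch))  # strongly negative: a clear horizontal edge
```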

18
The excessive calculation:
The features are calculated at all possible sizes and positions of the classifier, but the
amount of computation this takes is enormous: even a 24×24 window results in over 160,000
features. For each feature calculation, the sum of the pixels is also needed. To make
this computationally less expensive, the creators of Haar cascades introduced the integral
image, which, however large the image, reduces the calculation of any rectangle sum to an
operation on just a few values.
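The integral image trick can be sketched in a few lines of NumPy: after one cumulative-sum pass, the sum of any rectangle needs only four lookups, regardless of its size. This is illustrative code, not OpenCV's own cv2.integral implementation.

```python
import numpy as np

def integral_image(img):
    """One-pass integral image, padded with a zero row/column so that
    rectangle sums need no edge-case handling."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] using just four lookups."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 3, 3))  # equals img[1:3, 1:3].sum()
```

Because every Haar feature is a difference of rectangle sums, this makes each feature evaluation constant-time, which is what makes the 160,000+ features tractable.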
The false features:
Among the features that are calculated, most are false and irrelevant. A window applied to
one region of the image may respond to a different region that appears to have the same
features but does not in reality. So there is a need to discard the false features, which
is done by AdaBoost, which helps select the best features out of the 160,000+. AdaBoost,
short for Adaptive Boosting, is a machine learning algorithm used for this sole task.

Algorithm

The frame is captured and converted to grayscale.
Bilateral filtering is applied to remove impurities.
The face is detected with the haar cascade.
The ROI (Region Of Interest) of the face is fed to the eye detection part of the algorithm.
Eyes are detected and the resulting list is passed to an if-else construct.
If the length of the list is two or more, the eyes are open.
Else the program marks an eye blink and restarts the check.
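The if-else construct in the last three steps can be isolated into a small helper (the function name and box tuples are ours; in the real program the boxes come from detectMultiScale):

```python
def blink_state(eye_boxes, previously_open):
    # Two or more detected boxes means both eyes are visible; fewer boxes
    # while the eyes were previously open is counted as a blink.
    eyes_open = len(eye_boxes) >= 2
    blinked = previously_open and not eyes_open
    return eyes_open, blinked

# Frame 1: both eyes detected. Frame 2: only one box comes back (a blink).
state, blink = blink_state([(10, 10, 30, 30), (60, 10, 30, 30)], previously_open=False)
print(state, blink)  # True False
state, blink = blink_state([(10, 10, 30, 30)], previously_open=state)
print(state, blink)  # False True
```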

MODULES DESCRIPTION

1. Importing OpenCV
2. Importing XML file
3. Allowing WebCam to capture video
4. Capturing video in terms of frames
5. Converting the image to greyscale
6. Detecting Multi-scale faces
7. Mentioning sides of the rectangle for face detection
8. Displaying the detected Video

STEP 1 – Importing OpenCV

•	You can install and set up OpenCV for Python from the command prompt by typing
pip install opencv-python. OpenCV provides a real-time optimized computer
vision library, tools, and hardware support, and the same will be used in our project.

•	Type the below-mentioned commands at the top of the script:

import numpy as np

import cv2

Create a new directory and create a new file named 'Main.py'

In the Python file, import cv2

STEP 2 – Importing XML files

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

eye_cascade = cv2.CascadeClassifier('haarcascade_eye_tree_eyeglasses.xml')

These are the cascade XML files.

STEP 3 – Allowing WebCam to capture video
cv2.VideoCapture() gives the program access to the webcam when it is called with the
parameter 0. If, rather than the webcam, one wishes to detect faces in a video file, the
parameter 0 can simply be replaced with the video file name.

# variable to store the execution state

first_read = True

# start the video capture

cap = cv2.VideoCapture(0)

ret, img = cap.read()

STEP 4 – Capturing video in terms of frames


A video, when processed in real time, is divided into frames for the face recognition
process

while(ret):

ret,img = cap.read()

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

gray = cv2.bilateralFilter(gray,5,1,1)

STEP 5 – Converting the image to greyscale


The project we built works on greyscale images, so each captured frame
needs to be converted to greyscale before the face detection process
begins.

faces = face_cascade.detectMultiScale(gray, 1.3, 5, minSize=(200, 200))

STEP 6 – Detecting Multi-scale faces


Each frame may or may not contain a face at the same scale, i.e. size, and hence
the function detectMultiScale() allows the program to detect faces at multiple scales

detectMultiScale(InputArray image, double scaleFactor = 1.1, int minNeighbors = 3)
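The effect of scaleFactor can be pictured with a rough sketch of the scale pyramid the detector walks through (the window sizes here are assumed for illustration, not measured from OpenCV): a larger factor tries fewer scales, so detection is faster but coarser.

```python
def pyramid_scales(min_size, max_size, scale_factor):
    # Window sizes the detector would try, growing by scale_factor each pass.
    scales, size = [], float(min_size)
    while size <= max_size:
        scales.append(round(size, 2))
        size *= scale_factor
    return scales

# From a 24-pixel window up to a 200-pixel face:
print(len(pyramid_scales(24, 200, 1.1)))  # 23 scales searched
print(len(pyramid_scales(24, 200, 1.3)))  # 9 scales: faster, less thorough
```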

STEP 7 – Mentioning sides of the rectangle for face detection
This function lets us specify the dimensions, thickness, and color of the rectangle that
will be visible during face detection.

cv2.rectangle(image, start_point, end_point, color, thickness)

STEP 8 – Displaying the detected frames in terms of real-time video


cv2.imshow('img',img)

a = cv2.waitKey(1)

6. SYSTEM DESIGN AND IMPLEMENTATION

DATA FLOW DIAGRAM


A Data Flow Diagram (DFD) is a diagram that describes the flow of
data and the processes that change or transform data throughout a system. It is a
structured analysis and design tool that can be used for flowcharting in place
of, or in association with, information-oriented and process-oriented system
flowcharts.

USE CASE DIAGRAM
A use case diagram is a way to summarize details of a system and the
users within that system. It is generally shown as a graphic depiction of
interactions among different elements in a system. Use case diagrams
specify the events in a system and how those events flow; however, a use case
diagram does not describe how those events are implemented.

UML DIAGRAM

7. TESTING

SYSTEM TESTING

Testing is the stage of implementation aimed at ensuring that
the system works accurately and efficiently before live operation commences.
Testing is vital to the success of the system. System testing makes the logical
assumption that if all the parts of the system are correct, the goal will be
achieved. The candidate system is subjected to a variety of tests: online response,
volume, stress, recovery, security, and usability tests. A series of tests is
performed on the proposed system before the system is ready for user
acceptance testing.

INTEGRATION TESTING

Testing is done for each module. After testing all the modules, the
modules are integrated and the final system is tested with test data
specially designed to show that the system will operate successfully under all
conditions. System testing is thus a confirmation that everything is correct
and an opportunity to show the user that the system works.

VALIDATION TESTING

The final step involves validation testing, which determines whether
the software functions as the user expects. The end user rather than the system
developer conducts this test. Most software developers run a process called
"alpha and beta testing" to uncover defects that only the end user seems able to find. The
completion of the entire project depends on the full satisfaction of the end
users.

ACCEPTANCE TESTING

Acceptance testing can be defined in many ways, but a simple definition is that it succeeds
when the software functions in a manner that can reasonably be expected by the customer. After
the acceptance test has been conducted, one of two possible conditions exists: either the
function and performance characteristics conform to specification and are accepted, or a
deviation from specification is uncovered and a deficiency list is created. This testing also
checks whether the inputs are accepted by the database and other validations pass; for example,
only numbers are accepted in numeric fields, date-format data in date fields, and null checks
are applied to not-null fields. If any error occurs, an error message is shown.

WHITE BOX TESTING

This testing is also called glass box testing. Knowing the internal workings of a
product, tests can be conducted to ensure that the internal operation performs according to
specification and all internal components have been adequately exercised. It is a test-case
design method that uses the control structure of the procedural design to derive test cases. Basis
path testing is a white box technique.

BLACK BOX TESTING

In this testing, without knowing the internal operation of a product, tests are conducted to
demonstrate that each specified function is fully operational while searching for errors in each
function. It fundamentally focuses on the
functional requirements of the software.

IMPLEMENTATION

Implementation is one of the important tasks in a project. Implementation is the phase in
which one has to be cautious, because all the efforts undertaken during the project will be fruitful
only if the software is properly implemented according to the plans made.
The implementation phase is less creative than system design. It is primarily concerned
with user training, site preparation and file conversion. When the manager's system is linked to
remote sites, the telecommunication network and tests of the network along with the system are
also included under implementation.
Depending upon the nature of the system, extensive user training may be required.
Programming itself is design work. The initial parameters of the management information system
should be modified as a result of programming, which provides a reality test for the assumptions
made by the analysis.

8. CONCLUSION
The project has been appreciated by all the users in the organization. It has shown that it is
possible for an unmodified web camera to be used for eye detection. With further research
and precaution, a usable tracking interface could be implemented that requires no special
hardware or setup cost and involves only a simple software tool. Furthermore, it could
provide always-on eye detection.

9. FUTURE ENHANCEMENT
The Python OpenCV eye detector detects and tracks eye images against complex
backgrounds; distinctive features of the user's eye are used, and OpenCV applies a machine
learning algorithm to search for faces within a picture.

•	As the technology emerges, it is possible to upgrade the system and adapt it to the
desired environment.
•	The same approach is applicable wherever Python and OpenCV are available.
•	It can provide both eye detection and eye tracking.

10. APPENDIX

SOURCE CODE

import cv2

face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier("haarcascade_eye_tree_eyeglasses.xml")

# capture frames from a camera
cap = cv2.VideoCapture(0)

# loop runs if capturing has been initialized.
while 1:
    # reads frames from a camera
    ret, img = cap.read()

    # convert each frame to grayscale
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Detects faces of different sizes in the input image
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)

    for (x, y, w, h) in faces:
        # To draw a rectangle around a face
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 255, 0), 2)
        roi_gray = gray[y:y + h, x:x + w]
        roi_color = img[y:y + h, x:x + w]

        # Detects eyes of different sizes in the face ROI
        eyes = eye_cascade.detectMultiScale(roi_gray)

        # To draw a rectangle around each eye
        for (ex, ey, ew, eh) in eyes:
            cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 127, 255), 2)

    # Display the image in a window
    cv2.imshow('img', img)

    # Wait for Esc key to stop
    k = cv2.waitKey(5)
    if k == 27:
        break

# Release the capture device
cap.release()

# De-allocate any associated memory usage
cv2.destroyAllWindows()

<?xml version="1.0"?>

<!--

Stump-based 20x20 frontal eye detector.

Created by Shameem Hameed (http://umich.edu/~shameem)

////////////////////////////////////////////////////////////////////////////////////////

IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.

By downloading, copying, installing or using the software you agree to this license.

If you do not agree to this license, do not download, install,

copy or use the software.

Intel License Agreement

For Open Source Computer Vision Library

Copyright (C) 2000, Intel Corporation, all rights reserved.

Third party copyrights are property of their respective owners.

Redistribution and use in source and binary forms, with or without modification,

are permitted provided that the following conditions are met:

*Redistribution's of source code must retain the above copyright notice,

this list of conditions and the following disclaimer.

*Redistribution's in binary form must reproduce the above copyright notice,

this list of conditions and the following disclaimer in the documentation

and/or other materials provided with the distribution.

*The name of Intel Corporation may not be used to endorse or promote products

derived from this software without specific prior written permission.

This software is provided by the copyright holders and contributors "as is" and

any express or implied warranties, including, but not limited to, the implied

warranties of merchantability and fitness for a particular purpose are disclaimed.

In no event shall the Intel Corporation or contributors be liable for any direct,

indirect, incidental, special, exemplary, or consequential damages

(including, but not limited to, procurement of substitute goods or services;

loss of use, data, or profits; or business interruption) however caused

and on any theory of liability, whether in contract, strict liability,

or tort (including negligence or otherwise) arising in any way out of

the use of this software, even if advised of the possibility of such damage.

-->

<opencv_storage>

<cascade type_id="opencv-cascade-classifier"><stageType>BOOST</stageType>

<featureType>HAAR</featureType>

<height>20</height>

<width>20</width>

<stageParams>

<maxWeakCount>93</maxWeakCount></stageParams>

<featureParams>

<maxCatCount>0</maxCatCount></featureParams>

<stageNum>24</stageNum>

<stages>

<_>

<maxWeakCount>6</maxWeakCount>

<stageThreshold>-1.4562760591506958e+00</stageThreshold>

<weakClassifiers>

<_>

<internalNodes>

0 -1 0 1.2963959574699402e-01</internalNodes>

<leafValues>

-7.7304208278656006e-01 6.8350148200988770e-01</leafValues></_>

<_>

<internalNodes>

0 -1 1 -4.6326808631420135e-02</internalNodes>

<leafValues>

5.7352751493453979e-01 -4.9097689986228943e-01</leafValues></_>

<_>

<internalNodes>

0 -1 2 -1.6173090785741806e-02</internalNodes>

<leafValues>

6.0254341363906860e-01 -3.1610709428787231e-01</leafValues></_>

<_>

<internalNodes>

0 -1 3 -4.5828841626644135e-02</internalNodes>

<leafValues>

6.4177548885345459e-01 -1.5545040369033813e-01</leafValues></_>

<_>

<internalNodes>

0 -1 4 -5.3759619593620300e-02</internalNodes>

<leafValues>

5.4219317436218262e-01 -2.0480829477310181e-01</leafValues></_>

<_>

<internalNodes>

0 -1 5 3.4171190112829208e-02</internalNodes>

<leafValues>

-2.3388190567493439e-01 01</leafValues></_></weakClassifiers></_>

<_>

<maxWeakCount>12</maxWeakCount>

<stageThreshold>-1.2550230026245117e+00</stageThreshold>

<weakClassifiers>

<_>

<internalNodes>

0 -1 6 -2.1727620065212250e-01</internalNodes>

<leafValues>

7.1098899841308594e-01 -5.9360730648040771e-01</leafValues></_>

<_>

<internalNodes>

0 -1 7 1.2071969918906689e-02</internalNodes>

<leafValues>

-2.8240481019020081e-01 5.9013551473617554e-01</leafValues></_>

<_>

<internalNodes>

0 -1 8 -1.7854139208793640e-02</internalNodes>

<leafValues>

5.3137522935867310e-01 -2.2758960723876953e-01</leafValues></_>

<_>

<internalNodes>

0 -1 9 2.2333610802888870e-02</internalNodes>

<leafValues>

-1.7556099593639374e-01 6.3356137275695801e-01</leafValues></_>

<_>

<internalNodes>

0 -1 10 -9.1420017182826996e-02</internalNodes>

<leafValues>

6.1563092470169067e-01 -1.6899530589580536e-01</leafValues></_>

<_>

<internalNodes>

0 -1 11 2.8973650187253952e-02</internalNodes>

<leafValues>

-1.2250079959630966e-01 7.4401170015335083e-01</leafValues></_>

<_>

<internalNodes>

0 -1 12 7.8203463926911354e-03</internalNodes>

<leafValues>

1.6974370181560516e-01 -6.5441650152206421e-01</leafValues></_>

<_>

<internalNodes>

0 -1 13 2.0340489223599434e-02</internalNodes>

<leafValues>

-1.2556649744510651e-01 8.2710450887680054e-01</leafValues></_>

<_>

<internalNodes>

0 -1 14 -1.1926149949431419e-02</internalNodes>

<leafValues>

3.8605681061744690e-01 -2.0992340147495270e-01</leafValues></_>

<_>

<internalNodes>

0 -1 15 -9.7281101625412703e-04</internalNodes>

<leafValues>

-6.3761192560195923e-01 1.2952390313148499e-01</leafValues></_>

<_>

<internalNodes>

0 -1 16 1.8322050891583785e-05</internalNodes>

<leafValues>

-3.4631478786468506e-01 2.2924269735813141e-01</leafValues></_>

<_>

<internalNodes>

0 -1 17 -8.0854417756199837e-03</internalNodes>

<leafValues>

-6.3665801286697388e-01 01</leafValues></_></weakClassifiers></_>

<_>

<maxWeakCount>9</maxWeakCount>

<stageThreshold>-1.3728189468383789e+00</stageThreshold>

<weakClassifiers>

<_>

<internalNodes>

0 -1 18 -1.1812269687652588e-01</internalNodes>

<leafValues>

6.7844521999359131e-01 -5.0045782327651978e-01</leafValues></_>

<_>

<internalNodes>

0 -1 19 -3.4332759678363800e-02</internalNodes>

<leafValues>

6.7186361551284790e-01 -3.5744878649711609e-01</leafValues></_>

<_>

<internalNodes>

0 -1 20 -2.1530799567699432e-02</internalNodes>

<leafValues>

7.2220700979232788e-01 -1.8192419409751892e-01</leafValues></_>

<_>

<internalNodes>

0 -1 21 -2.1909970790147781e-02</internalNodes>

<leafValues>

6.6529387235641479e-01 -2.7510228753089905e-01</leafValues></_>

<_>

<internalNodes>

0 -1 22 -2.8713539242744446e-02</internalNodes>

<leafValues>

6.9955700635910034e-01 -1.9615580141544342e-01</leafValues></_>

<_>

<internalNodes>

0 -1 23 -1.1467480100691319e-02</internalNodes>

<leafValues>

5.9267348051071167e-01 -2.2097350656986237e-01</leafValues></_>

<_>

<internalNodes>

0 -1 24 -2.2611169144511223e-02</internalNodes>

<leafValues>

3.4483069181442261e-01 -3.8379558920860291e-01</leafValues></_>

<_>

<internalNodes>

0 -1 25 -1.9308089977130294e-03</internalNodes>

<leafValues>

-7.9445719718933105e-01 1.5628659725189209e-01</leafValues></_>

<_>

<internalNodes>

0 -1 26 5.6419910833938047e-05</internalNodes>

<leafValues>

-3.0896010994911194e-01 01</leafValues></_></weakClassifiers></_>

<_>

<maxWeakCount>16</maxWeakCount>

<stageThreshold>-1.2879480123519897e+00</stageThreshold>

<weakClassifiers>

<_>

<internalNodes>

0 -1 27 1.9886520504951477e-01</internalNodes>

<leafValues>

-5.2860701084136963e-01 3.5536721348762512e-01</leafValues></_>

<_>

<internalNodes>

0 -1 28 -3.6008939146995544e-02</internalNodes>

<leafValues>

4.2109689116477966e-01 -3.9348980784416199e-01</leafValues></_>

<_>

<internalNodes>

0 -1 29 -7.7569849789142609e-02</internalNodes>

<leafValues>

4.7991541028022766e-01 -2.5122168660163879e-01</leafValues></_>

<_>

<internalNodes>

0 -1 30 8.2630853285081685e-05</internalNodes>

<leafValues>

-3.8475489616394043e-01 3.1849220395088196e-01</leafValues></_>

<_>

<internalNodes>

0 -1 31 3.2773229759186506e-04</internalNodes>

<leafValues>

-2.6427319645881653e-01 3.2547241449356079e-01</leafValues></_>

<_>

<internalNodes>

0 -1 32 -1.8574850633740425e-02</internalNodes>

<leafValues>

4.6736589074134827e-01 -1.5067270398139954e-01</leafValues></_>

<_>

<internalNodes>

0 -1 33 -7.0008762122597545e-05</internalNodes>

<leafValues>

2.9313150048255920e-01 -2.5365099310874939e-01</leafValues></_>

<_>

<internalNodes>

0 -1 34 -1.8552130088210106e-02</internalNodes>

<leafValues>

4.6273660659790039e-01 -1.3148050010204315e-01</leafValues></_>

<_>

<internalNodes>

0 -1 35 -1.3030420057475567e-02</internalNodes>

<leafValues>

4.1627219319343567e-01 -1.7751489579677582e-01</leafValues></_>

<_>

<internalNodes>

0 -1 36 6.5694141085259616e-05</internalNodes>

<leafValues>

-2.8035101294517517e-01 2.6680740714073181e-01</leafValues></_>

<_>

<internalNodes>

0 -1 37 1.7005260451696813e-04</internalNodes>

<leafValues>

-2.7027249336242676e-01 2.3981650173664093e-01</leafValues></_>

<_>

<internalNodes>

0 -1 38 -3.3129199873656034e-03</internalNodes>

<leafValues>

4.4411438703536987e-01 -1.4428889751434326e-01</leafValues></_>

<_>

<internalNodes>

0 -1 39 1.7583490116521716e-03</internalNodes>

<leafValues>

-1.6126190125942230e-01 4.2940768599510193e-01</leafValues></_>

<_>

<internalNodes>

0 -1 40 -2.5194749236106873e-02</internalNodes>

<leafValues>

4.0687298774719238e-01 -1.8202580511569977e-01</leafValues></_>

<_>

<internalNodes>

0 -1 41 1.4031709870323539e-03</internalNodes>

<leafValues>

8.4759786725044250e-02 -8.0018568038940430e-01</leafValues></_>

<_>

<internalNodes>

0 -1 42 -7.3991729877889156e-03</internalNodes>

<leafValues>

5.5766099691390991e-01 - 01</leafValues></_></weakClassifiers></_>

<_>

<maxWeakCount>23</maxWeakCount>

<stageThreshold>-1.2179850339889526e+00</stageThreshold>

<weakClassifiers>

<_>

<internalNodes>

0 -1 43 -2.9943080618977547e-02</internalNodes>

<leafValues>

3.5810810327529907e-01 -3.8487631082534790e-01</leafValues></_>

<_>

<internalNodes>

0 -1 44 -1.2567380070686340e-01</internalNodes>

<leafValues>

3.9316931366920471e-01 -3.0012258887290955e-01</leafValues></_>

<_>

<internalNodes>

0 -1 45 5.3635272197425365e-03</internalNodes>

<leafValues>

-4.3908619880676270e-01 1.9257010519504547e-01</leafValues></_>

<_>

<internalNodes>

0 -1 46 -8.0971820279955864e-03</internalNodes>

<leafValues>

3.9906668663024902e-01 -2.3407870531082153e-01</leafValues></_>

<_>

<internalNodes>

0 -1 47 -1.6597909852862358e-02</internalNodes>

<leafValues>

4.2095288634300232e-01 -2.2674840688705444e-01</leafValues></_>

<_>

<internalNodes>

0 -1 48 -2.0199299324303865e-03</internalNodes>

<leafValues>

-7.4156731367111206e-01 1.2601189315319061e-01</leafValues></_>

<_>

<internalNodes>

0 -1 49 -1.5202340437099338e-03</internalNodes>

<leafValues>

-7.6154601573944092e-01 8.6373612284660339e-02</leafValues></_>

<_>

<internalNodes>

0 -1 50 -4.9663940444588661e-03</internalNodes>

<leafValues>

4.2182239890098572e-01 -1.7904919385910034e-01</leafValues></_>

<_>

<internalNodes>

0 -1 51 -1.9207600504159927e-02</internalNodes>

<leafValues>

4.6894899010658264e-01 -1.4378750324249268e-01</leafValues></_>

<_>

<internalNodes>

0 -1 52 -1.2222680263221264e-02</internalNodes>

<leafValues>

3.2842078804969788e-01 -2.1802149713039398e-01</leafValues></_>

<_>

<internalNodes>

0 -1 53 5.7548668235540390e-02</internalNodes>

<leafValues>

-3.6768808960914612e-01 2.4357110261917114e-01</leafValues></_>

<_>

<internalNodes>

0 -1 54 -9.5794079825282097e-03</internalNodes>

<leafValues>

-7.2245067358016968e-01 6.3664563000202179e-02</leafValues></_>

<_>

<internalNodes>

0 -1 55 -2.9545740690082312e-03</internalNodes>

<leafValues>

3.5846439003944397e-01 -1.6696329414844513e-01</leafValues></_>

<_>

<internalNodes>

0 -1 56 -4.2017991654574871e-03</internalNodes>

<leafValues>

3.9094808697700500e-01 -1.2041790038347244e-01</leafValues></_>

<_>

<internalNodes>

0 -1 57 -1.3624990358948708e-02</internalNodes>

<leafValues>

-5.8767718076705933e-01 8.8404729962348938e-02</leafValues></_>

<_>

<internalNodes>

0 -1 58 6.2853112467564642e-05</internalNodes>

<leafValues>

-2.6348459720611572e-01 2.1419279277324677e-01</leafValues></_>

<_>

<internalNodes>

0 -1 59 -2.6782939676195383e-03</internalNodes>

<leafValues>

-7.8390169143676758e-01 8.0526962876319885e-02</leafValues></_>

<_>

<internalNodes>

0 -1 60 -7.0597179234027863e-02</internalNodes>

<leafValues>

4.1469261050224304e-01 -1.3989959657192230e-01</leafValues></_>

<_>

<internalNodes>

0 -1 61 9.2093646526336670e-02</internalNodes>

<leafValues>

-1.3055180013179779e-01 5.0435781478881836e-01</leafValues></_>

<_>

<internalNodes>

0 -1 62 -8.8004386052489281e-03</internalNodes>

<leafValues>

3.6609750986099243e-01 -1.4036649465560913e-01</leafValues></_>

<_>

<internalNodes>

0 -1 63 7.5080977694597095e-05</internalNodes>

<leafValues>

-2.9704439640045166e-01 2.0702940225601196e-01</leafValues></_>

<_>

<internalNodes>

0 -1 64 -2.9870450962334871e-03</internalNodes>

<leafValues>

3.5615700483322144e-01 -1.5445969998836517e-01</leafValues></_>

<_>

<internalNodes>

0 -1 65 -2.6441509835422039e-03</internalNodes>

<leafValues>

-5.4353517293930054e-01 01</leafValues></_></weakClassifiers></_>

<_>

<maxWeakCount>27</maxWeakCount>

<stageThreshold>-1.2905240058898926e+00</stageThreshold>

<weakClassifiers>

<_>

<internalNodes>

0 -1 66 -4.7862470149993896e-02</internalNodes>

<leafValues>

4.1528239846229553e-01 -3.4185820817947388e-01</leafValues></_>

<_>

<internalNodes>

0 -1 67 8.7350532412528992e-02</internalNodes>

<leafValues>

-3.8749781250953674e-01 2.4204200506210327e-01</leafValues></_>

<_>

<internalNodes>

0 -1 68 -1.6849499195814133e-02</internalNodes>

<leafValues>

5.3082478046417236e-01 -1.7282910645008087e-01</leafValues></_>

<_>

<internalNodes>

0 -1 69 -2.8870029374957085e-02</internalNodes>

<leafValues>

3.5843509435653687e-01 -2.2402590513229370e-01</leafValues></_>

<_>

<internalNodes>

0 -1 70 2.5679389946162701e-03</internalNodes>

<leafValues>

1.4990499615669250e-01 -6.5609407424926758e-01</leafValues></_>

<_>

<internalNodes>

0 -1 71 -2.4116659536957741e-02</internalNodes>

<leafValues>

5.5889678001403809e-01 -1.4810280501842499e-01</leafValues></_>

<_>


<internalNodes>

0 -1 218 -2.4550149682909250e-03</internalNodes>

<leafValues>

2.3330999910831451e-01 -1.3964480161666870e-01</leafValues></_>

<_>

<internalNodes>

0 -1 219 1.2721839593723416e-03</internalNodes>

<leafValues>

6.0480289161205292e-02 -4.9456089735031128e-01</leafValues></_>

<_>

<internalNodes>

0 -1 220 -4.8933159559965134e-03</internalNodes>

<leafValues>

-6.6833269596099854e-01 4.6218499541282654e-02</leafValues></_>

<_>

<internalNodes>

0 -1 221 2.6449989527463913e-02</internalNodes>

<leafValues>

-7.3235362768173218e-02 4.4425961375236511e-01</leafValues></_>

<_>

<internalNodes>

0 -1 222 -3.3706070389598608e-03</internalNodes>

<leafValues>

-4.2464339733123779e-01 6.8676561117172241e-02</leafValues></_>

<_>

<internalNodes>

0 -1 223 -2.9559480026364326e-03</internalNodes>

<leafValues>

1.6218039393424988e-01 -1.8222999572753906e-01</leafValues></_>

<_>

<internalNodes>

0 -1 224 3.0619909986853600e-02</internalNodes>

<leafValues>

-5.8643341064453125e-02 5.3263628482818604e-01</leafValues></_>

<_>

<leafValues> -6.5215840935707092e-02 6.2109231948852539e-01</leafValues></_>

<_>

<internalNodes>

0 -1 350 -9.1709550470113754e-03</internalNodes>

<leafValues>

-7.5553297996520996e-01 5.2640449255704880e-02</leafValues></_>

<_>

<internalNodes>

0 -1 351 6.1552738770842552e-03</internalNodes>

<leafValues>

9.0939402580261230e-02 -4.4246131181716919e-01</leafValues></_>

<_>

<internalNodes>

0 -1 352 -1.0043520014733076e-03</internalNodes>

<le

<rects>

<_>

14 4 3 15 -1.</_>

<_>

15 4 1 15 3.</_></rects></_>

<_>

<rects>

<_>

19 13 1 2 -1.</_>

<_>

19 14 1 1 2.</_></rects></_>

<_>

<rects>

<_>

2 6 5 8 -1.</_>

<_>

2 10 5 4 2.</_></rects></_></features></cascade>

</opencv_storage>

SCREEN SHOTS

OpenCV identifies the faces

Face detection

Haar cascades

Eye cascade features

Face database gathering

OpenCV eye detection

11. REFERENCES

1. Patrick Naughton and Herbert Shildt, "The Complete Reference Python 2.0".
2. Joseph L. Weber, "Using Python 2.0", PHI Publishers.
3. Pages Book by Wrox Publications.
4. SQL Complete Reference by Liviol.
5. https://github.com/mjrovai/opencv-face-recognition
6. https://www.geeksforgeeks.org/python-har-cascades-for-object-detection
7. http://itssourcecode.com/free-project/python-project/real-time-eye-detection-opencv-pyhon
8. http://datamahadev.com/face-detection-using-a-real-time-webcam-and-implementation-with-haar-casecades-algorithm

