
Driver Eye-On-The-Road

Monitoring & Alarm System

BY:

JOHNDALE S. ESTALLO

A Thesis Presented to the Faculty

of the College of Engineering and Architecture

in Partial Fulfillment of the Requirements for the Degree of

Bachelor of Science in Computer Engineering

University of Nueva Caceres

October 2019
CHAPTER 1

Introduction

Safe operation of a motor vehicle requires that a driver focus a substantial portion of his or her attention resources on driving-related tasks, including monitoring the roadway, anticipating the actions of other drivers, and controlling the vehicle (Eby & Kostyniuk, 2003). The National Highway Traffic Safety Administration (NHTSA) has estimated that driver inattention is a causative factor in 25–30 percent of police-reported traffic crashes in the United States, approximately 1.2 million crashes per year (Stutts & Hunter, 2003). Driver distraction is conceptualized as one mechanism of driver inattention, defined as "the diversion of attention away from activities critical for safe driving toward a competing activity, which may result in insufficient or no attention to activities critical for safe driving."

The development of technologies for preventing or detecting driver inattention while driving is a major challenge in the field of accident prevention systems (Gupta & Garima, 2014). In the study of Fletcher and Zelinsky (2009), the correlation between eye gaze and road events was used to detect driver inattentiveness by estimating the driver's observations in real time within the vehicle; through the integration of driver eye-gaze tracking and road scene event detection, the driver's behavior can be validated. According to Gupta and Garima (2014), to detect driver inattention, a vision-based system is used to locate the eyes and mouth and determine the driver's drowsiness level through the horizontal average intensities of the eye and mouth regions of the face. Moreover, another study tracked the driver's visual inattention by fusing stereo vision and lane tracking data, running both rule-based and support-vector machine (SVM) classification methods.

The problem that this study addresses is that driver inattention can affect vehicle control and may lead to an accident. The study proposes a system that serves as an alert device for drivers.

The goal of this study is to develop a system that can serve as an alert device for drivers with the help of image processing. Such a system may help avoid, reduce, and prevent vehicle accidents. With some modifications, this study could also be applied to systems such as checking students' attentiveness in the classroom or during examinations, or monitoring focus at work to improve productivity. This study may also be used as a reference for further development of eye detection and may serve as reference material for future studies.

This study is limited to monitoring and detecting when the eyes are not directed at the road in front. The system can be used for both night and day driving. It will use an infrared camera to continuously track the driver's facial landmarks and eye movement. Images are captured by the camera at an estimated fixed rate.


Chapter 2

Review of Related Literature

2.1 Car Accidents

Road accidents are one of the major human-induced disasters in the world, as car crashes are the leading road killer (Haulle & Kisiri, 2016). Worldwide, an estimated 1.2 million people are killed in road crashes each year and as many as 50 million are injured. Projections indicate that these figures will increase by about 65% over the next 20 years unless there is new commitment to prevention. Nevertheless, the tragedy behind these figures attracts less mass media attention than other, less frequent types of tragedy (WHO, 2004).

Figure 2.1 Deaths of road traffic accidents (1995-2010)


The traffic statistics of the Ministry of Public Safety and Traffic Management, as shown in figure 2.1, show an increase in the number of deaths in Libya, especially in recent years (Ismail & Yahia, 2011).

2.1.1 Causes of Car Accidents

In developing countries, and especially Arab countries without exception, the problem of traffic accidents is very serious and has become more difficult to control. It has become one of the most important problems that damage or destroy material resources such as vehicles, lamp posts, adjacent buildings, bridges, and human life (Ismail & Yahia, 2011).

Figure 2.2 Causes of traffic accidents

The important elements in traffic accidents, as shown in figure 2.2, are the driver (the human element), the road, and the vehicle, so knowledge of driver behavior and attention to the road and the vehicle is important in reducing traffic accidents in developing countries (Ismail & Yahia, 2011).


2.1.1.1 Human Factors

Driver-related behavioral factors and human errors dominate the causation of traffic crashes. Driving behaviors and styles are influenced by external and driver-specific factors. Individual and societal characteristics which influence driving behavior in a way that can affect the chances of crash occurrence collectively constitute human factors in traffic safety. Thus, behavioral factors can be distinguished as those that reduce capability on a long-term basis (inexperience, aging, disease and disability, alcoholism, drug abuse), those that reduce capability on a short-term basis (drowsiness, fatigue, acute alcohol intoxication, short-term drug effects, binge eating, acute psychological stress, temporary distraction), those that promote risk-taking behavior with long-term impact (overestimation of capabilities, macho attitude, habitual speeding, habitual disregard of traffic regulations, indecent driving behavior, non-use of seat belt or helmet, inappropriate sitting while driving, accident proneness), and those that promote risk-taking behavior with short-term impact (moderate ethanol intake, psychotropic drugs, motor vehicle crime, suicidal behavior, compulsive acts). The classification aims to assist in the conceptualization of the problem and may also contribute to behavior modification-based efforts.

2.2 Image Processing

Digital image processing is the processing of images which are digital in nature by a digital computer. Image processing is a method to perform some operations on an image in order to get an enhanced image or to extract some useful information from it. It is a type of signal processing in which the input is an image and the output may be an image or characteristics/features associated with that image (Jain, Hong, Pankanti, & Bolle, 2004).

2.2.1 Face Detection

Automatic face detection is the cornerstone of all applications revolving around automatic facial image analysis, including, but not limited to, face recognition and verification, face tracking for surveillance, facial behavior analysis, facial attribute recognition and assessment of beauty, face relighting and morphing, facial shape reconstruction, image and video retrieval, as well as organization and presentation of digital photo albums. Face detection is also the initial step in all modern vision-based human-computer and human-robot interaction systems (Zhao, Rong, & Zhang, 2014).

Furthermore, the majority of commercial digital cameras have an embedded face detector that is used to help auto-focusing. Finally, many social networks, such as Facebook, use face detection mechanisms for the purpose of image/person tagging (Zhao, Rong, & Zhang, 2014).

2.2.2 Eye Detection

Eye detection is a crucial aspect of many useful applications, ranging from face recognition/detection to human-computer interfaces, driver behavior analysis, and compression techniques like MPEG-4 (D'Orazio, Cicirelli, & Distante, 2014). Eye detection is also used in person identification by iris matching. Only those image regions that contain possible eye pairs are fed into a subsequent face verification system. Localization of the eyes is also a necessary step for many face classification methods (Rajpathak, Kumar, & Schwartz, 2009).

2.2.2.1 Eye Detection using Haar Cascades

Haar-like features are rectangular patterns in data. A cascade is a series of Haar-like features that are combined to form a classifier. A Haar wavelet is a mathematical function that produces square wave output (Viola & Jones, 2004). Haar features are composed of either two or three rectangles. Face candidates are scanned and searched for the Haar features of the current stage. The weight and size of each feature, and the features themselves, are generated by the learning algorithm AdaBoost. Each Haar feature has a value that is calculated by taking the area of each rectangle, multiplying each by its respective weight, and then summing the results. The area of each rectangle is easily found using the integral image: given the coordinates of any corner of a rectangle, its area can be computed quickly (Sri, Divya, Vyshnav, & Tejaswini, 2018).

Figure 2.5 Haar-like Features


The size and position of a pattern's support can vary provided its black and white rectangles have the same dimension, border each other, and keep their relative positions (Wang, 2014).
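
To make the weighted rectangle sum described above concrete, the following sketch evaluates a simple two-rectangle (edge-type) Haar-like feature directly on a small NumPy patch. The patch contents, feature position, and unit weights are illustrative assumptions, not values taken from a trained detector.

    import numpy as np

    def haar_edge_feature(patch, x, y, w, h):
        """Two-rectangle 'edge' feature: bright left half minus dark right half.

        Each rectangle's pixel sum is multiplied by its weight (+1 / -1 here)
        and the results are added, as described for Haar features above.
        """
        half = w // 2
        left = patch[y:y + h, x:x + half].sum()       # white rectangle, weight +1
        right = patch[y:y + h, x + half:x + w].sum()  # black rectangle, weight -1
        return float(left) - float(right)

    # Synthetic 24x24 patch with a bright-to-dark vertical edge.
    patch = np.zeros((24, 24), dtype=np.uint8)
    patch[:, :12] = 200   # bright left side
    patch[:, 12:] = 30    # dark right side
    print(haar_edge_feature(patch, x=4, y=4, w=16, h=16))  # large positive response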

2.2.2.1.1 Haar Classifier

A Haar classifier uses the rectangle integral to calculate the value of a Haar feature. The Haar classifier multiplies the weight of each rectangle by its area, and the results are added together. Several Haar classifiers compose a stage. A stage accumulator sums all the Haar classifier results in a stage, and a stage comparator compares this summation with a stage threshold. The threshold is also a constant obtained from the AdaBoost algorithm. Each stage does not have a set number of Haar features; depending on the parameters of the training data, individual stages can have a varying number of Haar features (Sri, Divya, Vyshnav, & Tejaswini, 2018).
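
The stage logic described above can be sketched as follows: each weak Haar classifier votes with its weight, the stage accumulator sums the votes, and the sum is compared with the stage threshold. All numbers in this sketch are hypothetical placeholders, not the output of an actual AdaBoost training run.

    def evaluate_stage(feature_values, weights, thresholds, stage_threshold):
        """Sum the weak-classifier votes of one stage and compare to its threshold.

        Each weak classifier votes with its weight if its Haar feature value
        exceeds that classifier's own threshold (a simplified decision stump).
        """
        accumulator = 0.0
        for value, weight, thresh in zip(feature_values, weights, thresholds):
            if value > thresh:
                accumulator += weight
        return accumulator >= stage_threshold   # True: candidate passes this stage

    # Hypothetical numbers for illustration only.
    print(evaluate_stage(
        feature_values=[812.0, -95.0, 240.0],
        weights=[0.6, 0.3, 0.4],
        thresholds=[500.0, 0.0, 300.0],
        stage_threshold=0.5,
    ))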

2.2.2.1.2 Cascade

Figure 2.6 Cascade Classifier

The cascade eliminates candidates by making stricter requirements in each stage, with later stages being much more difficult for a candidate to pass. Candidates exit the cascade if they pass all stages or fail any stage. A face is detected if a candidate passes all stages (Sri, Divya, Vyshnav, & Tejaswini, 2018).
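
In practice, the attentional cascade is usually run through pretrained models rather than implemented from scratch. The sketch below, which assumes a test image file named face.jpg is available, chains the stock frontal-face and eye Haar cascades that ship with OpenCV.

    import cv2

    # Pretrained cascades bundled with OpenCV.
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    img = cv2.imread("face.jpg")                      # hypothetical input image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Each detection is a candidate window that passed every cascade stage.
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]                  # search for eyes inside the face
        eyes = eye_cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=5)
        print(f"face at ({x},{y}) with {len(eyes)} eye candidate(s)")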

2.2.2.1.3 Integral Image

The integral image was introduced as a new image representation by Paul Viola and Michael Jones. The integral image is used to accelerate the computation of the features used by the object detector, and this image representation has proved to work very quickly in object detection (Huang, Lin, & Long, 2009). The integral image at location (x, y) contains the sum of the pixels above and to the left of x and y; it is the summation of all the pixel values of the original image up to that point (Putta, Shinde, & Lohani, 2014).

Figure 2.7 Illustration for calculating a mask value using integral images. The coordinate origin is in the upper left corner.
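
The following sketch builds an integral image with NumPy and recovers a rectangle sum from the standard four-corner lookup, matching the description above; the 6x6 test array is arbitrary.

    import numpy as np

    def integral_image(img):
        """ii[y, x] holds the sum of all pixels above and to the left of (x, y),
        with a zero row/column prepended so rectangle sums need no boundary checks."""
        ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
        ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
        return ii

    def rect_sum(ii, r0, c0, r1, c1):
        """Sum of img[r0:r1, c0:c1] recovered from four integral-image lookups."""
        return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

    img = np.arange(36, dtype=np.int64).reshape(6, 6)   # arbitrary test data
    ii = integral_image(img)
    assert rect_sum(ii, 1, 2, 4, 5) == img[1:4, 2:5].sum()
    print(rect_sum(ii, 1, 2, 4, 5))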

2.2.2.2 Template matching Method

Template matching is one of the most typical techniques for feature extraction. Correlation is commonly exploited to measure the similarity between a stored template and the window image under consideration. Templates should be deliberately designed to cover the variety of possible image variations. During the search over the whole image, scale and rotation should also be considered carefully to speed up the process. To increase the robustness of the tracking scheme, the method automatically generates a codebook of images representing the different appearances of the eyes that are encountered (Dhruw & Tigga, 2009).
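
A minimal template-matching sketch using OpenCV's normalized cross-correlation is given below. It assumes eye_template.png is a stored eye template and face.jpg is the frame being searched, and it omits the scale, rotation, and codebook handling discussed above.

    import cv2

    frame = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)             # hypothetical frame
    template = cv2.imread("eye_template.png", cv2.IMREAD_GRAYSCALE)  # stored eye template

    # Normalized cross-correlation: higher scores mean a closer match.
    scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, max_score, _, max_loc = cv2.minMaxLoc(scores)

    h, w = template.shape
    top_left = max_loc
    bottom_right = (top_left[0] + w, top_left[1] + h)
    print(f"best match {max_score:.2f} at window {top_left} -> {bottom_right}")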

2.2.2.3 Projection Function

By combining an integral projection function, which considers the mean of intensity, and a variance projection function, which considers the variance of intensity, the hybrid function better captures the vertical variation in intensity of the eyes. Kumar suggests a technique in which possible eye areas are localized using simple thresholding in color space, followed by a connected-component analysis to quantify spatially connected regions and further reduce the search space to determine the contending eye-pair windows (Dhruw & Tigga, 2009).
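
The sketch below computes the integral (mean) and variance projection functions over a stand-in eye-region array and combines them into a simple hybrid score along the vertical axis. The random region and the particular weighting of the two functions are illustrative assumptions.

    import numpy as np

    def integral_projection(region, axis=1):
        """Mean intensity of each row (axis=1) or column (axis=0)."""
        return region.mean(axis=axis)

    def variance_projection(region, axis=1):
        """Intensity variance of each row (axis=1) or column (axis=0)."""
        return region.var(axis=axis)

    def hybrid_projection(region, alpha=0.5):
        """Weighted mix of the two projections (alpha chosen arbitrarily here);
        dark, high-variance rows such as the eye line score highest."""
        ipf = integral_projection(region)
        vpf = variance_projection(region)
        return alpha * (ipf.max() - ipf) + (1 - alpha) * vpf

    region = np.random.randint(0, 256, (40, 60)).astype(np.float64)  # stand-in eye region
    row_scores = hybrid_projection(region)
    print("candidate eye row:", int(row_scores.argmax()))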

2.2.2.4 IR Method

The most common approach employed to achieve eye detection in real time is to use infrared lighting to capture the physiological properties of the eyes and an appearance-based model to represent the eye patterns. The appearance-based approach detects eyes based on the intensity distribution of the eyes by exploiting the differences in appearance of the eyes from the rest of the face. This method requires a significant amount of training data to enumerate all possible appearances of eyes, i.e., representing the eyes of different subjects, under different face orientations, and under different illumination conditions. The collected data is used to train a classifier, such as a neural network or support vector machine, to achieve detection (Dhruw & Tigga, 2009).
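
As a hedged illustration of the appearance-based training step, the sketch below fits a scikit-learn support vector machine on flattened intensity patches; the random arrays stand in for real labeled eye and non-eye crops collected under infrared illumination.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Stand-ins for real training data: 24x24 grayscale crops, flattened.
    eye_patches = rng.integers(0, 256, (200, 24 * 24)).astype(np.float64)
    non_eye_patches = rng.integers(0, 256, (200, 24 * 24)).astype(np.float64)

    X = np.vstack([eye_patches, non_eye_patches]) / 255.0   # simple normalization
    y = np.array([1] * 200 + [0] * 200)                     # 1 = eye, 0 = non-eye

    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit(X, y)

    candidate = rng.integers(0, 256, (1, 24 * 24)) / 255.0  # a window to classify
    print("eye" if clf.predict(candidate)[0] == 1 else "not an eye")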

2.2.2.5 Modelling

This is a simple yet efficient method for eye detection. The human eye is modeled as a circle circumscribed in an ellipse, where the circle represents the iris of the human eye and the ellipse represents the eyelashes. The Hough transform can be used to detect the aforesaid circle and ellipse; the final eye is then determined by discarding the wrong detections and singling out a pair of eyes based on geometrical considerations (Dhruw & Tigga, 2009).
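
A sketch of the circle-fitting step using OpenCV's Hough transform is given below, assuming eye_region.png is a cropped grayscale eye image; the radius bounds and accumulator parameters are illustrative values that would need tuning on real data.

    import cv2
    import numpy as np

    eye = cv2.imread("eye_region.png", cv2.IMREAD_GRAYSCALE)   # hypothetical eye crop
    eye = cv2.medianBlur(eye, 5)                               # reduce noise before Hough

    # Look for a single iris-sized circle; parameter values are illustrative only.
    circles = cv2.HoughCircles(
        eye, cv2.HOUGH_GRADIENT, dp=1, minDist=eye.shape[0],
        param1=100, param2=30, minRadius=8, maxRadius=30)

    if circles is not None:
        x, y, r = np.round(circles[0, 0]).astype(int)
        print(f"iris candidate at ({x}, {y}), radius {r}")
    else:
        print("no circular iris candidate found")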

2.3 Python Programming

Python is an open-source, high-level programming language developed by Guido van Rossum in the late 1980s and presently administered by the Python Software Foundation. It came from the ABC language that he helped create early in his career. Reading and writing code in Python is much like reading and writing regular English statements. Because they are not written in machine-readable language, Python programs need to be processed before machines can run them. Python is an object-oriented language that allows users to manage and control data structures or objects to create and run programs. Everything in Python is, in fact, first class: all objects, data types, functions, methods, and classes take equal position in Python (Johansen, 2016).

Figure 2.8 Example of a Python program
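
As a small illustration of the first-class nature of Python objects noted above, the snippet below passes a function around like any other value.

    def greet(name):
        return f"Hello, {name}!"

    def apply_twice(func, value):
        # Functions are ordinary objects: they can be passed in and called later.
        return func(func(value))

    say = greet                              # bind the function to another name
    print(say("driver"))                     # Hello, driver!
    print(apply_twice(str.upper, "alert"))   # ALERT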


2.4 OpenCV

OpenCV is an open-source computer vision library. The library is written in C and C++ and runs under Linux, Windows, and Mac OS X. There is active development on interfaces for Python, Ruby, MATLAB, and other languages. OpenCV was designed for computational efficiency and with a strong focus on real-time applications. OpenCV is written in optimized C and can take advantage of multicore processors. If further automatic optimization on Intel architectures is desired, Intel's Integrated Performance Primitives (IPP) libraries can be purchased; these consist of low-level optimized routines in many different algorithmic areas. OpenCV automatically uses the appropriate IPP library at runtime if that library is installed (Bradski & Kaehler, 2008).

Figure 2.9 Logo of OpenCV

2.5 Dlib

Dlib-ml is a cross-platform open-source software library written in the C++ programming language. Its design is heavily influenced by ideas from design by contract and component-based software engineering. This means it is first and foremost a collection of independent software components, each accompanied by extensive documentation and thorough debugging modes. Moreover, the library is intended to be useful in both research and real-world commercial projects and has been carefully designed to make it easy to integrate into a user's C++ application (King, 2009).

Figure 2.10 Example of Facial Landmarks using Dlib
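
A sketch of a dlib landmark pipeline similar to the one behind Figure 2.10 is given below. It assumes the publicly distributed shape_predictor_68_face_landmarks.dat model file and a test image face.jpg are available locally.

    import cv2
    import dlib

    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed local file

    img = cv2.imread("face.jpg")                     # hypothetical input image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    for face in detector(gray):
        shape = predictor(gray, face)
        # Points 36-47 of the 68-point model cover the left and right eyes.
        eye_points = [(shape.part(i).x, shape.part(i).y) for i in range(36, 48)]
        print(f"face {face} -> {len(eye_points)} eye landmark points")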


Chapter 3

Methodology

3.1 Conceptual Framework of the Study

In performing this study, a basic conceptual framework was constructed to map out the requirements for prototyping the device. The methods and concepts used to develop the system are presented below. This study will follow this conceptual framework.

Figure 3.1 Proposed System Conceptual Framework

Figure 3.1, shown above, is the conceptual framework diagram of the system. The researcher uses the input-process-output (IPO) model in developing the proposed system since it offers an efficient way to both analyze and document the critical aspects of a transformation process. The framework starts with data gathering from different studies in order to develop and implement a system that provides a new method or solution. A system was developed and intended to address the issues found in the gathered information. The system procedure is discussed in the succeeding paragraphs. The last procedure was to test the developed system and analyze the gathered information.

3.1.1 Input Phase

From the conceptual framework shown in Figure 3.1, the first phase is the input phase. This phase covers data gathering, software requirements, and hardware requirements. The inputs from the gathered data and the software and hardware requirements are met to make the study feasible.

3.1.1.1 Data Gathering

Data and information gathering is the first part of the development process of the driver eye-on-the-road monitoring and alarm system. These data enable the researcher to address and evaluate the problem well. Correct information helps the researcher make the study well defined and understood. From the gathered data, the researcher can determine the software and hardware requirements of the developed system.

Relevant information includes how long the driver can lose attention on the road while driving before the control of the vehicle is affected, and how a Haar cascade algorithm can be used to monitor the driver's eyes and detect the driver's inattention. These data will be used as a reference for the development of the proposed system.

3.1.1.2 Software and Hardware Requirements

To perform image processing, the proponent intends to use the OpenCV and dlib open-source libraries.

3.1.2 System Design


In this phase, the researcher presents the processes of the system, from planning and development to testing and analysis. This part shows what the system looks like and how the system will function and work. The system design phase is made to apply all the gathered data and turn it into a working system.

The proposed system is composed of a Raspberry Pi and a camera. During a ride, the camera captures and monitors the driver's eye movement. It is connected to the Raspberry Pi, which processes the images through the algorithm. When the driver's inattention is captured by the camera and detected by the system, the system gives an alarm for the driver to get his attention back on the road. A speedometer is connected to the Raspberry Pi, which checks the speed of the vehicle and uses it to set a time interval for each vehicle speed, as sketched below.
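
Since the exact speed-to-interval mapping is not fixed at this stage, the sketch below only illustrates the idea that higher speeds allow a shorter eyes-off-road interval before the alarm fires; the thresholds are hypothetical placeholders, not measured values from this study.

    def allowed_off_road_seconds(speed_kph):
        """Map vehicle speed to a maximum eyes-off-road interval (placeholder values)."""
        if speed_kph < 20:
            return 3.0     # slow traffic: more tolerance
        if speed_kph < 60:
            return 2.0
        return 1.0         # highway speeds: alarm quickly

    def should_alarm(seconds_off_road, speed_kph):
        return seconds_off_road > allowed_off_road_seconds(speed_kph)

    print(should_alarm(seconds_off_road=1.5, speed_kph=80))   # True
    print(should_alarm(seconds_off_road=1.5, speed_kph=30))   # False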


Figure 3.3 Use Case Diagram of the Proposed System

In figure 3.3, the person, as the driver, starts the vehicle to begin driving, and the system likewise starts once the vehicle system has started. The camera connected to the Raspberry Pi then begins to follow and monitor the eyes. When the system detects the driver's inattention through the camera, the alarm is activated and turns on so that the driver gets his attention back on the road.

Figure 3.4 Flowchart of the Proposed System

Figure 3.4, shown above, is the flowchart of the proposed system. It starts with video acquisition of the captured video. Next, the video is divided into frames: the live video is taken as input and converted into a series of frames/images, which are then processed. This is followed by face detection using predefined Haar cascade samples. Next, eye detection tries to detect the driver's eyes. Once the driver's eyes are detected, they are monitored until the system detects driver inattention by checking the state of the eyes. The system then gives an alarm, timed according to the speed of the vehicle that is connected to the system. A sketch of this processing loop is shown below.
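
Tying the flowchart stages together, the sketch below shows one possible shape of the main loop, under the assumption that eyes going undetected for longer than the allowed interval counts as inattention; the camera index, Haar cascades, fixed interval, and print-based alarm are placeholders for the actual Raspberry Pi, speedometer, and Traffic HAT setup.

    import time

    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    def trigger_alarm():
        print("ALARM: eyes off the road")       # placeholder for the buzzer / Traffic HAT

    cap = cv2.VideoCapture(0)                   # camera index is an assumption
    last_eyes_seen = time.time()
    MAX_OFF_ROAD_SECONDS = 2.0                  # would come from the speed logic above

    while cap.isOpened():
        ok, frame = cap.read()                  # video acquisition, frame by frame
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        eyes_found = False
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
            roi = gray[y:y + h, x:x + w]        # eye detection inside the face region
            if len(eye_cascade.detectMultiScale(roi, 1.1, 5)) > 0:
                eyes_found = True
        if eyes_found:
            last_eyes_seen = time.time()
        elif time.time() - last_eyes_seen > MAX_OFF_ROAD_SECONDS:
            trigger_alarm()

    cap.release()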

3.1.3 Programming

The development of the proposed system includes the programming of the software. The design of the program will be based on the requirements obtained during the system design and on the flowchart presented in figure 3.4.

3.1.4 Testing
References
Ismail, A., & Yahia, H. A. (2011). Causes and effects of road traffic accidents in Tripoli - Libya. Proceedings of the 6th Civil Engineering Conference in Asia Region: Embracing the Future through, 17.

Zhao, X., Rong, J., & Zhang, X. (2014). Study of the Effects of Alcohol on Drivers and Driving
Performance on Straight Road. Creative Commons Attribution License, 19.

Bradski, G., & Kaehler, A. (2008). Learning OpenCV. Sebastopol: O’Reilly Media, Inc.

Dhruw, K. K., & Tigga, A. K. (2009). Eye detection using variants of Hough transform and off-line signature verification. Bachelor of Technology in Electronics & Instrumentation Engineering, 1-98.

Eby, D. W., & Kostyniuk, L. P. (2003, May). Driver distraction and crashes. UMTRI, pp. 1-32.

Gupta, D. S., & Garima, E. (2014, July). Road Accident Prevention System Using Driver's
Drowsiness Detection by Combining Eye Closure and Yawning. International Journal of
Research (IJR), Volume 1(Issue 6), 5.

Haulle, E., & Kisiri, M. (2016, November). The impact of road accidents to the community of Iringa Municipality: Challenges in reducing risks. International and Multidisciplinary Journal of Social Sciences, vol. 5, 253-280.

Jain, A. K., Hong, L., Pankanti, S., & Bolle, R. (2004). Biometric Identification.

Johansen, A. (2016). The Ultimate Beginner's Guide.


King, D. (2009). Dlib-ml: A Machine Learning Toolkit. Journal of Machine Learning Research 10
(2009) 1755-1758, 1-4.

Lee, J. W., Kim, K. W., Hong, H. G., & Park, K. R. (2017). A Survey on Banknote Recognition
Methods by Various Sensors. Sensors, 17(2), 313.

Ministry of Roads Transport and Highways. (2013). Common Causes of Road Accidents. India:
Ministry of Road Transport & Highways, Government of India.

National Center for Statistics and Analysis. (2019, April). Distracted driving in fatal crashes, 2017.
Traffic Safety Facts Research Note Report No. DOT HS 812 700, 15.

National Highway Traffic Safety Administration. (2015). Research on Drowsy Driving.


Washington, DC: U.S. DEPARTMENT OF TRANSPORTATION.

National Transportation Safety Board. (2017). Reducing Speeding-Related Crashes Involving


Passenger Vehicles. Safety Study NTSB/SS-17/01, 1-74.

Neelima, N., Lakshmi, S., & Vardhan, T. (2013, November ). Design and Development of Warning
System for Drowsy Drivers. International Journal of Scientific and Research Publications,
Volume 3(Issue 11), 5.

NHTSA. (2006, April). The 100-Car Naturalistic Driving Study Phase II – Results of the 100-Car
Field Experiment. DOT HS 810 593, 1-344.

Nissan North America. (2016). Driver Attention Alert (DAA). USA: Nissan News USA.

Rajpathak, T., Kumar, R., & Schwartz, E. (2009). Eye Detection Using Morphological and Color.
2009 Florida Conference on Recent Advances in Robotics (pp. 1-6). Machine Intelligence
Laboratory, Department of Electrical and Computer Engineering.

Sehgal, T., Maindalkar, S., & More, S. (2016, September). Safety Device for Drowsy Driving using
IOT. International Journal of Advanced Research in Computer and Communication
Engineering, Vol. 5(Issue 9), 3.

Sri, M., Divya, P., Vyshnav, J., & Tejaswini, B. (2018). Detection of drowsy eyes using Viola-Jones face. International Journal of Current Engineering and Scientific Research (IJCESR), 1-5.

Stutts, J. C., & Hunter, W. W. (2003, July). Driver Inattention, Driver Distraction and Traffic
Crashes. ITE Journal, 1-6.

T. D'Orazio, M. L., G. Cicirelli, & Distante, A. (2014). An algorithm for real time eye detection in face images. Proceedings of the 17th International Conference on Pattern Recognition (pp. 1-4). Institute of Intelligent Systems for Automation, Via Amendola 122/D-I, 70126 Bari (Italy).
The Royal Society for the Prevention of Accidents. (2017). Road Safety Factsheet: Driver
Distraction. Rospa, 1-7.

Viola, P., & Jones, M. J. (2004). Robust real-time face detection. Int. J. Comput. Vis., 137-154.

Volvo Car Corporation. (2007, August). Volvo Cars introduces new systems for alerting tired and
distracted drivers. Global Newsroom.

Wang, Y. Q. (2014). An Analysis of the Viola-jones Face Detection Algorithm. Image Process Line,
128-148.

Wheaton, A., Chapman, D., Presley-Cantrell, L., Croft, J., & Roehler, D. (2013). Drowsy driving –
19 states and the District of Columbia, 2009-2010. Columbia: MMWR Morb Mortal
Weekly Report.

WHO. (2004). Road safety - Speed. WHO.

WHO. (2004). World report on road traffic injury prevention: Summary. Geneva: WHO Library
Cataloguing-in-Publication Data.


Components List

Logitech C920: 1, ₱2,350.00
Raspberry Pi 3: 1, ₱2,460.00
Traffic HAT for sound effects (Raspberry Pi): ₱433.179
Laptop: 1, Free
Total: ₱5,243.179
