
Enhance Automated Teller Machine (ATM)
Security by Introducing Facial Recognition
This project is submitted to the Department of Computer Science and
Engineering, Dhaka International University, in partial fulfillment of the
requirements for the degree of Bachelor of Science (B.Sc.) in Computer
Science and Engineering (CSE).
Submitted by

NAME REG. NO ROLL NO


Md. Arifuzzaman Tushar CS-E-59-16-103966 39
Md. Afif Ahsan CS-E-59-16-103677 05
Rukaiya Farzana CS-E-59-16-104006 45
Subarna Akter CS-E-59-16-103787 17
Mitali Akter CS-E-59-16-103692 08

Batch: 59th (2nd shift), Session: 2016-2017

Supervised by
Rafid Mostafiz
Lecturer

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING


FACULTY OF SCIENCE AND ENGINEERING
DHAKA INTERNATIONAL UNIVERSITY
DHAKA, BANGLADESH
SEPTEMBER-2020
Department of Computer Science & Engineering
Dhaka International University, Dhaka-1205

Supervisor’s Statement:
This is to certify that the project paper entitled “Enhance Automated
Teller Machine (ATM) Security by Introducing Facial Recognition”
submitted by Md. Arifuzzaman Tushar, Roll: 39; Md. Afif Ahsan, Roll: 05;
Rukaiya Farzana, Roll: 45; Subarna Akter, Roll: 17; Mitali Akter, Roll: 08;
has been carried out under my supervision. This project has been prepared in
partial fulfillment of the requirement for the Degree of Bachelor of Science
(B.Sc.) in Computer Science & Engineering, Department of Computer
Science & Engineering, Dhaka International University, Dhaka, Bangladesh.

Supervisor’s Signature

Date: ………………… ...........…...………………………


Rafid Mostafiz
Lecturer
Dept. of Computer Science & Engineering
Dhaka International University

APPROVAL
The project report entitled “Enhance Automated Teller Machine (ATM)
Security by Introducing Facial Recognition” submitted by Md.
Arifuzzaman Tushar, Md. Afif Ahsan, Rukaiya Farzana, Subarna Akter, and
Mitali Akter to the Department of Computer Science & Engineering, Dhaka
International University, has been accepted as satisfactory for the partial
fulfillment of the requirements for the degree of B.Sc. in Computer Science
and Engineering and approved as to its style and contents.

Board of Examiners

1. Chairman ..........................................................................
Prof. Dr. A. T. M. Mahbubur Rahman
Dean,
Faculty of Science and Engineering,
and Chairman,
Dept. of Computer Science and Engineering,
Dhaka International University

2. Member ......................................................................
Associate Prof. Md. Abdul Based
Chairman,
Dept. of Electrical, Electronics &
Telecommunication Engineering,
Dhaka International University

3. Supervisor and Member .....................................................................


Rafid Mostafiz
Lecturer
Dept. of Computer Science and Engineering
Dhaka International University

4. External Member ..........................................................


Prof. Dr. Hafiz Md. Hasan Babu
Ex-Chairman, Dept. of Computer Science and
Engineering, University of Dhaka,
Pro-Vice Chancellor, National University of
Bangladesh

DECLARATION
We hereby declare that the work presented in this project has been carried out
by us under the supervision of Rafid Mostafiz, Lecturer, Department of
Computer Science and Engineering, Dhaka International University, Dhaka,
Bangladesh, in fulfillment of the requirements for the Bachelor of Science
(B.Sc.) in Computer Science & Engineering. We also declare that the work
presented in this project has not been submitted elsewhere for the award of
any degree.
Authors

............................................... ......................................
Md. Arifuzzaman Tushar Md. Afif Ahsan
B.Sc. in CSE, Roll: 39 B.Sc. in CSE, Roll: 05
Reg. No: CS-E-59-16-103966 Reg. No: CS-E-59-16-103677
Batch: 59th (2nd shift), Session: 2016-17 Batch: 59th (2nd shift), Session: 2016-17
Dhaka International University Dhaka International University

...................................... ..................................... ......................................


Rukaiya Farzana Subarna Akter Mitali Akter
B.Sc. in CSE, Roll: 45 B.Sc. in CSE, Roll: 17 B.Sc. in CSE, Roll: 08
Reg. No: CS-E-59-16-104006 Reg. No: CS-E-59-16-103787 Reg. No: CS-E-59-16-103692
Batch: 59th (2nd shift), Session: 2016-17 Batch: 59th (2nd shift), Session: 2016-17 Batch: 59th (2nd shift), Session: 2016-17
Dhaka International University Dhaka International University Dhaka International University

Supervisor’s Signature

Date: ………………… ...........…...………………………


Rafid Mostafiz
Lecturer
Dept. of Computer Science & Engineering
Dhaka International University

ABSTRACT

The ATM plays a significant role in banking transactions, as it can serve
customers 24 hours a day throughout the year without any break. However,
existing ATMs face several scams (stolen cards, fake cards, card cloning,
skimming, etc.) even though they have physical and technological security
measures. This project aims to use a new technology-based solution to protect
the ATM system from these scams. It implements facial recognition as a new
layer of security alongside all existing measures. The human face is detected
using the Histogram of Oriented Gradients (HOG) method, and an affine
transformation of the face is performed using the dlib library. A Deep
Convolutional Neural Network (Deep CNN) was trained to extract unique
measurements from the human face (128 different measurements from a
single face), and a Support Vector Machine (SVM) is used for face
classification (identification). Finally, a prototype built on a Raspberry Pi
minicomputer was used for simulation.

ACKNOWLEDGEMENTS

We would like to pay our gratitude to the almighty Allah who created us with
not only the ability to design and program this system but also the power of
practice.

We would also like to express our sincere gratitude to our respected
supervisor, Rafid Mostafiz, Lecturer, Department of CSE, Dhaka International
University, for his continuous encouragement, motivation, and professional
guidance during this project, which provided a good basis for the present
dissertation. We are deeply grateful to him for his detailed, constructive
comments and his important support throughout; without his valuable
support, this project could not have reached this level of development.

We would like to thank all the faculty members for the valuable time they
spent on the requirements analysis and evaluation of this project work.

We would like to express our sincere and cordial gratitude to the people who
supported us directly, provided mental encouragement, and evaluated and
criticized our work in several phases during the development of this project
and the preparation of this dissertation. We are also thankful to our family
and friends who contributed, directly or indirectly, to the development work
and its associated activities.

We warmly thank Prof. Dr. A.T.M. Mahbubur Rahman, Dean, Faculty of
Science and Engineering and Chairman, Department of Computer Science
and Engineering, Dhaka International University, for his valuable advice and
moral support. His extensive discussion of the work and his interesting
explorations of its operation have been very helpful for this study.

Dedicated to

All Covid-19 Frontline Warriors

TABLE OF CONTENTS

SUPERVISOR’S STATEMENT II
APPROVAL III
DECLARATION IV
ABSTRACT V
ACKNOWLEDGMENTS VI
DEDICATION VII
TABLE OF CONTENTS VIII
LIST OF FIGURES XI
LIST OF TABLES XIII

CHAPTER 1: Introduction
1.1 BACKGROUND OF THE STUDY 2
1.2 LITERATURE REVIEW 3
1.3 AIM AND OBJECTIVES 5
1.4 SCOPE OF THE STUDY 6
1.5 LIMITATIONS OF THE STUDY 6
1.6 SIGNIFICANCE OF THE STUDY 6
1.7 OPERATIONAL DEFINITION OF TERMS 7

CHAPTER 2: Project Design


2.1 SYSTEM ANALYSIS AND DESIGN 9
2.1.1 Analysis of the Existing System 9
2.1.1.1 Requirement gathering and analysis 10
2.1.1.2 Design 10
2.1.1.3 Development / Coding 10
2.1.1.4 Testing & Integration 10
2.1.1.5 Implementation 10
2.1.1.6 Maintenance 11

2.1.2 Justifications of the New System 11
2.2 DATA FLOW DIAGRAM 11

CHAPTER 3: Methodology
3.1 METHODOLOGY 14
3.1.1 Face Recognition 14
3.1.1.1 Finding all the Faces 14
3.1.1.2 Posing and Projecting Faces 17
3.1.1.3 Encoding Faces 18
3.1.1.4 Finding the person 20

CHAPTER 4: Design & Development Tools


4.1 SOFTWARE 22
4.1.1 Python 22
4.1.2 PyCharm 23
4.1.3 SQLite 24
4.2 DEPENDENCIES 25
4.2.1 OpenCV 25
4.2.2 NumPy 27
4.2.3 CMake 29
4.2.4 Dlib 30
4.2.5 Face-Recognition 30
4.2.6 Twilio 30
4.2.7 Tkinter 31
4.3 HARDWARE 31
4.3.1 Raspberry Pi 31

CHAPTER 5: Project Overview


5.1 USER INTERFACE 34
5.1.1 Home Screen 34
5.1.2 Processing Input 35

5.1.3 PIN Error 35
5.1.4 No Face Found 36
5.1.5 Face Recognition Failed 37
5.1.6 OTP Processing 38
5.1.7 Wrong OTP 39
5.1.8 Transaction Menu 39
5.2 OTP GENERATION 40
5.3 PROJECT SIMULATION 42

CHAPTER 6: Conclusion
6.1 CONCLUSION 44
6.2 LIMITATIONS 44
6.3 FUTURE DEVELOPMENT 44

LIST OF FIGURES

Figure No. Figure Name Page No.


Figure 2.1 Software Development Life Cycle 9
Figure 2.2 Data flow diagram of existing system 11
Figure 2.3 Data flow diagram of proposed system 12
Figure 3.1 Pixels from a black and white image 14
Figure 3.2 Draw an arrow to darker direction 15
Figure 3.3 Replacing all pixels with directional arrow 15
Figure 3.4 The original image is turned into a HOG representation that 16
captures the major features of the image regardless of image
brightness
Figure 3.5 Finding face using HOG 16
Figure 3.6 The 68 landmarks we will locate on every face 17
Figure 3.7 Face transformation to as close as centered 18
Figure 3.8 A single triplet training step 19
Figure 3.9 128 measurements for face in given image 20
Figure 4.1 Logo of Python 23
Figure 4.2 Logo of PyCharm 24
Figure 4.3 Logo of SQLite 25
Figure 4.4 Logo of OpenCV 26
Figure 4.5 Logo of NumPy 27
Figure 4.6 Methods in NumPy 28
Figure 4.7 Logo of CMake 29
Figure 4.8 Logo of Dlib 30
Figure 4.9 Logo of Twilio 30
Figure 4.10 Raspberry Pi 4 B 32
Figure 5.1 Home screen interface 34
Figure 5.2 Processing input interface 35

Figure 5.3 PIN error interface 36
Figure 5.4 Face not found error interface 37
Figure 5.5 Facial recognition failed interface 38
Figure 5.6 OTP verification interface 38
Figure 5.7 Wrong OTP interface 39
Figure 5.8 Transaction menu interface 40
Figure 5.9 OTP message from Twilio 41
Figure 5.10 Demo ATM used in this project 42

LIST OF TABLES

Table No. Table Name Page No.


Table 4.1 List of software and hardware used 22

CHAPTER-1
Introduction
INTRODUCTION

1.1 BACKGROUND OF THE STUDY


An Automated Teller Machine (ATM) is an electronic telecommunications device that
enables customers of financial institutions to perform financial transactions, such as cash
withdrawals, deposits, funds transfers, or account information inquiries, at any time and
without direct interaction with bank staff. The total number of ATM booths in Bangladesh
reached 10,924 at the end of December 2019, and total cash transactions through ATMs
stood at about Tk 147.05 billion in 2019 [1].

This ever-growing technology demands strong security to ensure that clients can perform
their transactions safely. Covering the ATM booth with closed-circuit television (CCTV)
cameras and human security guards are physical security measures, and technology-based
protections such as firewalls, data encryption, and network security are already in place to
ensure safe ATM service for clients. In 2016, EMV chip cards, which incorporate
cryptographic mechanisms and store sensitive data inside an embedded integrated circuit
(IC) module, were introduced in Bangladesh to replace traditional magnetic stripe cards
for better security. However, scams like stolen cards, fake cards, card cloning, and
skimming [2][3] have become very common recently, and they can deceive the existing
security measures easily.

Machine learning is the idea that there are generic algorithms that can tell us something
interesting about a set of data without us having to write any custom code specific to the
problem. Instead of writing code, we feed data to the generic algorithm and it builds its
own logic based on the data.

With advances in machine learning and computer vision, it is possible to distinguish a
human face in a digital image or a video frame and characterize each face with a unique
identification. Deep learning is a subset of machine learning in artificial intelligence that
uses networks capable of learning, even unsupervised, from unstructured or unlabeled
data; it is also known as deep neural learning or deep neural networks.

Therefore, our project aims to create a facial recognition-based ATM to make sure that
every transaction is done with the consent of a related account holder.

1.2 LITERATURE REVIEW

The article [4] on biometric systems proposes using facial recognition to reinforce security
on one of the oldest and most widely used pieces of banking technology, the automated
teller machine. The main use of any biometric system is to authenticate an input by
identifying and verifying it against an existing database. ATM security has changed little
since the machines' introduction in the late 1970s, which leaves them vulnerable to a new
breed of thieves who use technological advances to their advantage. With this in mind, it
is high time something was done about the security of this technology; besides, there
cannot be too much security when it comes to people's money. Facial recognition has
proven to be among the most secure biometric methods, to the point that it is widely used
in the United States for high-level security and has even been entrusted to assist in the
fight against terrorism. With newer techniques such as artificial intelligence, which help
eliminate disturbances and distortions that reduce the system's effectiveness, the authors
argue that accuracy can rise from roughly 60-75% to 80-100%, and they claim such
techniques would make the system effectively impenetrable.

In another paper [5], Deepa Malviya (2014) focuses on ATM security approaches and
improves them with a biometric authentication technique, namely face recognition from
three angles. One of the main motives is to diminish the effects of attacks on ATMs
through biometrics. The result is a strengthened biometric ATM system intended as a
defensive approach for the coming years, one that will raise customers' confidence in the
banking sector. From her proposed conceptual model, she concludes that a biometric ATM
system is highly secure, as it authenticates using a body part, i.e., face recognition from
three different angles. Biometric authentication with smart cards is a stronger method of
authentication and verification because it is uniquely bound to individuals, and it is viable
because it is easy to maintain and operate at low cost. The paper introduces a new
authentication technique for secure ATM transactions; devising a face-grid algorithm and
an effective ATM simulator forms the main focus of her further research.

Nowadays there are devices that perform biometric identification and authentication using
the fingerprint, hand, retina, iris, face, and voice, and researchers have proposed various
biometric authentication ideas based on them. The fingerprint approach to identification
given by Oko S. and Oruh J. (2012) [6] did not prove efficient: when a person approaches
an ATM, their fingers may be dirty from the natural environment, so the print will not
match the one captured during enrollment and the account cannot be accessed. Secondly,
an iris and retina approach was proposed by Bhosale S. and Sawant B. (2012) [7] as an
identification method, but customers might not want a laser beamed into their eyes for a
retina scan every time they access an account through an ATM; thus iris and retina
identification also proved inefficient. Voice combined with a smart card was also proposed
as a biometric for ATM security, but it has its own drawbacks: two people can have very
similar voices, and one could easily defraud another's account.

The research titled “A Third Generation Automated Teller Machine Using Universal
Subscriber Module with Iris Recognition” [8] pointed out that, in practice, ATM cards are
used as a form of identification and authentication. However, there is a high possibility of
ATM cards being stolen or lost, and if a card is bent or heated it becomes useless for
accessing the ATM. With the increase in ATM fraud, new authentication mechanisms are
being developed to overcome these security problems. One inherent problem with ATM
cards is the possibility of loss or theft, and the card must be carried for each and every
transaction, which we forget to do in many cases.

According to [9], there are two types of comparison in face recognition. The first is
verification, where the system compares the given individual against who that individual
claims to be and returns a yes or no decision. The second is identification, where the
system compares the given individual against all other individuals in the database and
returns a ranked list of matches. Face recognition technology analyzes the unique shape,
pattern, and positioning of facial features; it is a very complex technology and is largely
software based.

Eum et al. [10] observed in their research that biometrics have been extensively utilized
to reduce ATM-related crimes. One of the most widely used methods is to capture facial
images of users for follow-up criminal investigations; however, this method is vulnerable
to attacks by criminals with heavy facial occlusions. In today's banking operations, user
identity protection and password protection alone are no longer enough to guard personal
information. The authors of [11] explain different types of vulnerabilities and weak points
that are exploited during financial operations, generating fraudulent transactions through
fake entries and fake cards that leave the ATM vulnerable.

As per [12], the most significant impact of ATM technology is the customer's ability to
withdraw money outside banking hours, but this feat is not without challenges: ATM
technology is prone to fraud, which has made many people shun its use. As suggested by
[13], biometric authentication has great potential to improve security, reduce cost, and
enhance customer convenience in payment systems; despite these benefits, it has not yet
been adopted by large-scale point-of-sale and automated teller machine systems. [14]
discussed a newly emerging trend in facial recognition software that uses a 3D model,
which claims to provide more accuracy. Capturing a real-time 3D image of a person's
facial surface, 3D facial recognition uses distinctive features of the face where rigid tissue
and bone are most apparent, such as the curves of the eye socket, nose, and chin, to
identify the subject. These areas are all unique and do not change over time.

Once a card is accessible, the PIN can be guessed or obtained through other means such
as social engineering, shoulder surfing, or outright collection under duress. Recently,
biometric ATMs have been introduced for use alongside the card, which will definitely
reduce fraud if fully implemented. Further development has produced biometric
authentication in Japan, where the customer's face is used as a means of authentication
[15, 16].

There is a great advantage in using double authentication for security, as one is required
to have both the ATM PIN and a matching face in order to perform a transaction. This
will dramatically reduce card theft incidences: one may have the password/PIN of the card
but will still be required to match the face of the card owner. Even in the case of identical
twins, the PIN will decide who the real owner of the ATM card is. However, if the card
owner has an accident or a facial injury, he will be prompted to go to the bank where his
account details are stored in order to update the stored image to match his current
appearance. If the customer has forgotten the PIN for the ATM card, there is no option
other than going to the bank where he first opened his account to have the PIN reset. [17]

1.3 AIM AND OBJECTIVES

Against this background, the main objective of this project is to add a new layer of security
over the existing ATM system, without any hardware changes, such that transactions
depend not only on the correct PIN of a card but also on the person performing the
transaction. We resolve the following three main questions throughout the project:
1. How will the ATM distinguish the original account holder from others?
2. What happens if the ATM cannot identify the original account holder?
3. What are the consequences if someone else tries to access the ATM?

Upon completion of this project, all financial institutions providing ATM services will be
able to ensure a much safer environment for their clients, and the rate of ATM card fraud
in Bangladesh should lessen. We developed a prototype to simulate the entire workflow
at a minimal scale.
1.4 SCOPE OF THE STUDY

The scope of this work is to develop an enhanced Automated Teller Machine (ATM)
system that improves the existing ATM security standard by using facial recognition. The
system is a software-based solution and does not require any new hardware changes to
the existing system. We used Python 3 as the preferred programming language for
building the user interface as well as the backend. An SQLite database was used for the
database design, and a Raspberry Pi 4 was used to run the simulation. However, an actual
deployment would use the existing database and server to perform its operations.

1.5 LIMITATIONS OF THE STUDY

The quality of the camera feed from the ATM booth can be reduced in low light, and the
system might then fail to recognize the original client; in that case the client needs to
provide an OTP for authentication. The system is also not fully capable of distinguishing
between a real human and a digital image or video, so it is possible to fool it. In addition,
we used the API of a pretrained model to detect and recognize faces because of limited
time, data, and computing capacity; developing and training our own models would be
needed for even better accuracy. Finally, further network security measures such as
blockchain and SDN would be needed to make this system a complete package.

1.6 SIGNIFICANCE OF THE STUDY

There is a great advantage in using face recognition-based authentication, as one is
required to have both the ATM PIN and a matching face in order to perform a transaction.
This will dramatically reduce card theft incidences: one may have the password/PIN of
the card but will still be required to match the face of the card owner. Even in the case of
identical twins, the PIN will decide who the real owner of the ATM card is.

Since our project does not require any hardware changes to the existing setup, no
additional cost is needed to adopt this system.

1.7 OPERATIONAL DEFINITION OF TERMS

Biometrics are physiological or behavioral characteristics unique to individuals; these
include fingerprints, hand geometry, handwriting, iris, retina, vein, and voice.

PIN stands for Personal Identification Number.

OTP stands for One-Time Password.

HOG stands for Histogram of Oriented Gradients.

Affine Transformation corrects the angle of the face to create a centered, front-facing
image.

Deep CNN stands for Deep Convolutional Neural Network.

SVM, or Support Vector Machine, is a machine learning model that helps a machine
make decisions.

Rapid Application Development is the concept that products can be developed faster
and at higher quality.

Authentication is the process of determining whether someone or something is, in fact,
who or what it declares itself to be.

Face Detection is a process in which a computer processes a digital image to locate any
human faces in it.

Face Recognition is a process of comparing two human faces to identify whether both
images are of the same person.

CHAPTER-2
Project Design
PROJECT DESIGN

2.1 SYSTEM ANALYSIS AND DESIGN


In order to design the system, the relational database must be designed first. Conceptual
design can be divided into two parts: the data model and the process model. The data
model focuses on what data should be stored in the database, while the process model
deals with how the data is processed.
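The data-model side can be sketched as a tiny SQLite schema, since the project stores its data in SQLite (Section 4.1.3). This is an illustrative sketch only: the table and column names below are our own assumptions, not the project's actual design.

```python
import sqlite3

# Hypothetical account table for the E-ATM data model.
# Column names are illustrative assumptions, not the project's real schema.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE account (
        card_no       TEXT PRIMARY KEY,   -- card number read at the ATM
        pin_hash      TEXT NOT NULL,      -- hashed PIN, never stored in plain text
        phone         TEXT NOT NULL,      -- destination number for the OTP SMS
        face_encoding BLOB NOT NULL,      -- 128-value vector from the Deep CNN
        balance       REAL NOT NULL DEFAULT 0.0
    )
""")
conn.execute(
    "INSERT INTO account VALUES (?, ?, ?, ?, ?)",
    ("1234567890", "hash-of-pin", "+8801700000000", b"\x00" * 512, 5000.0),
)
row = conn.execute(
    "SELECT balance FROM account WHERE card_no = ?", ("1234567890",)
).fetchone()
print(row[0])  # 5000.0
```

The process model then reads this table during a transaction: look up the card, check the PIN hash, and fetch the stored face encoding for comparison.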

2.1.1 Analysis of the Existing System


The ATM plays a very important role in the banking sector because of its round-the-clock
availability and because ATMs far outnumber the branches of any bank. The number of
people using ATMs is increasing day by day, and to meet the demand banks are expanding
their ATM networks even into remote areas. The entire ATM system is automated to
function smoothly. However, security has become more challenging as technology
advances. The existing chip-based card and PIN cannot ensure adequate safety for clients:
using modern technology, one can breach the security easily, which makes every account
vulnerable and poses a big challenge to ATM service providers. These companies need to
solve this problem soon, and that can only be accomplished by introducing advanced
technology.

There are six phases in every Software development life cycle model:

Figure 2.1: Software Development Life Cycle


2.1.1.1 Requirement gathering and analysis
Business requirements are gathered in this phase, which is the main focus of the project
managers and stakeholders. Meetings with managers, stakeholders, and users are held to
determine requirements such as: Who is going to use the system? How will they use it?
What data should be input into the system? What data should the system output? These
general questions get answered during requirement gathering. The gathered requirements
are then analyzed for their validity, and the possibility of incorporating them into the
system to be developed is also studied.

Finally, a Requirement Specification document is created, which serves as a guideline for
the next phase of the model. The testing team follows the Software Testing Life Cycle
and starts the test planning phase after the requirements analysis is completed.

2.1.1.2 Design
In this phase the system and software design are prepared from the requirement
specifications studied in the first phase. System design helps in specifying hardware and
system requirements and in defining the overall system architecture. The design
specifications serve as input for the next phase of the model. In this phase the testers also
come up with the test strategy, deciding what to test and how to test it.

2.1.1.3 Development / Coding


On receiving the system design documents, the work is divided into modules/units and
actual coding starts. Since the code is produced in this phase, it is the main focus for the
developers, and it is the longest phase of the software development life cycle.

2.1.1.4 Testing & Integration


After the code is developed, it is tested against the requirements to make sure the product
actually solves the needs gathered during the requirements phase. During this phase all
types of functional testing (unit testing, integration testing, system testing, and acceptance
testing) are done, as well as non-functional testing.

2.1.1.5 Implementation
After successful testing, the product is delivered/deployed to the customers for their use.
As soon as the product reaches the customers, they first perform beta testing. If any
changes are required or any bugs are found, they report them to the engineering team;
once those changes are made or the bugs are fixed, the final deployment happens.

2.1.1.6 Maintenance
Once the customers start using the developed system, real problems come up and need to
be solved from time to time. This ongoing care for the developed product is known as
maintenance.

2.1.2 Justifications of the New System


The new system deals with the limitations of the existing system by adding an additional
layer of verification, based on facial recognition, between the account holder and the
person trying to access the ATM. It also provides an OTP verification option if face
recognition fails. The new system performs its functionality entirely in the backend, so
users do not need to learn any new interface.
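The layered check just described can be sketched in Python. This is a minimal sketch under stated assumptions: the function names and the six-digit OTP length are our own illustrative choices, and a real deployment would wire these steps to the card reader, camera feed, and SMS gateway.

```python
import secrets

def generate_otp(digits=6):
    """Generate a numeric one-time password (six digits is an assumed length)."""
    return "".join(secrets.choice("0123456789") for _ in range(digits))

def authorize(pin_ok, face_ok, otp_entered=None, otp_sent=None):
    """Sketch of the layered verification: PIN first, then face, then OTP fallback."""
    if not pin_ok:
        return False                 # wrong PIN: reject immediately
    if face_ok:
        return True                  # face matched the account holder
    # Face recognition failed: fall back to the OTP sent to the registered phone.
    return otp_entered is not None and otp_entered == otp_sent

otp = generate_otp()
print(authorize(pin_ok=True, face_ok=False, otp_entered=otp, otp_sent=otp))  # True
```

The ordering matters: face verification never replaces the PIN, it only adds a second gate, and the OTP gate opens only when the face check has already failed.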

2.2 DATA FLOW DIAGRAM

Figure 2.2: Data flow diagram of existing system

Figure 2.3: Data flow diagram of proposed system

CHAPTER-3
Methodology
METHODOLOGY

3.1 METHODOLOGY
The main feature of this project is facial recognition added over the existing ATM system.
The human face is detected using the Histogram of Oriented Gradients (HOG) method,
and an affine transformation of the face is performed using the dlib library. A Deep
Convolutional Neural Network (Deep CNN) is used to extract unique measurements from
the human face (128 different measurements from a single face), and a Support Vector
Machine (SVM) performs face classification (identification).

3.1.1 Face Recognition


Let’s tackle this problem one step at a time. In each step, we will discuss a different
machine learning algorithm. We are not going to explain every algorithm completely;
rather, we will cover the main ideas behind each one.

3.1.1.1 Finding all the Faces


The first step is face detection. Obviously, we need to locate the faces in a photograph
before we can try to tell them apart! If we have used any camera made in the last ten years,
we have probably seen face detection in action. Face detection went mainstream in the
early 2000s, when Paul Viola and Michael Jones invented a way to detect faces that was
fast enough to run on cheap cameras. However, much more reliable solutions exist now.
We used a method invented in 2005 called Histogram of Oriented Gradients, or HOG for short.

To find faces in an image, we will start by making our image black and white, because we
do not need color data to find faces. Then we will look at every single pixel in our image
one at a time. For every single pixel, we want to look at the pixels directly surrounding it:

Figure 3.1: Pixels from a black and white image

Our goal is to figure out how dark the current pixel is compared to the pixels directly
surrounding it. Then we want to draw an arrow showing in which direction the image is
getting darker:

Figure 3.2: Drawing an arrow toward the darker direction

If we repeat that process for every single pixel in the image, we will end up with every
pixel being replaced by an arrow. These arrows are called gradients and they show the flow
from light to dark across the entire image:

Figure 3.3: Replacing all pixels with directional arrows

This might seem like a random thing to do, but there is a really good reason for replacing
the pixels with gradients. If we analyze pixels directly, really dark images and really light
images of the same person will have totally different pixel values. But by only considering
the direction that brightness changes, both really dark images and really bright images will
end up with the same exact representation. But saving the gradient for every single pixel
gives us way too much detail. We end up missing the forest for the trees. It would be better
if we could just see the basic flow of lightness/darkness at a higher level so we could see
the basic pattern of the image.
To do this, we will break the image up into small squares of 16x16 pixels each. In each
square, we will count how many gradients point in each major direction (how many point
up, up-right, right, etc.). Then we will replace that square in the image with the arrow
directions that were strongest. The end result is that we turn the original image into a very
simple representation that captures the basic structure of a face:

Figure 3.4: The original image is turned into a HOG representation that captures the
major features of the image regardless of image brightness
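The cell-and-histogram step can be sketched in a few lines of NumPy. This is a toy illustration of the idea only, not the optimized HOG implementation dlib provides (which adds block normalization among other refinements); the function name and defaults here are our own.

```python
import numpy as np

def hog_cells(gray, cell=16, bins=8):
    """Toy HOG: per-cell histogram of gradient orientations, weighted by magnitude."""
    gray = gray.astype(float)
    gy, gx = np.gradient(gray)                 # how brightness changes in y and x
    mag = np.hypot(gx, gy)                     # gradient strength per pixel
    ang = np.mod(np.arctan2(gy, gx), np.pi)    # orientation folded into [0, pi)
    h, w = gray.shape
    out = np.zeros((h // cell, w // cell, bins))
    for i in range(h // cell):
        for j in range(w // cell):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            idx = np.minimum((a / np.pi * bins).astype(int), bins - 1)
            out[i, j] = np.bincount(idx, weights=m, minlength=bins)
    return out
```

For an image whose brightness increases steadily downwards, every gradient points the same way, so all the weight in each cell lands in a single orientation bin.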

To find faces in this HOG image, all we have to do is find the part of our image that looks
the most similar to a known HOG pattern that was extracted from a bunch of other training
faces:

Figure 3.5: Finding face using HOG


3.1.1.2 Posing and Projecting Faces
We isolated the faces in our image. But now we have to deal with the problem that faces
turned in different directions look totally different to a computer. To account for this, we
will try to warp each picture so that the eyes and lips are always in the same place in the
image. This will make it a lot easier to compare faces in the next steps.

To do this, we are going to use an algorithm called face landmark estimation. There are
lots of ways to do this, but we are going to use the approach invented in 2014 by Vahid
Kazemi and Josephine Sullivan. The basic idea is we will come up with 68 specific points
(called landmarks) that exist on every face — the top of the chin, the outside edge of each
eye, the inner edge of each eyebrow, etc. Then we will train a machine learning algorithm
to be able to find these 68 specific points on any face:

Figure 3.6: The 68 landmarks we will locate on every face.

Now that we know where the eyes and mouth are, we will simply rotate, scale and shear
the image so that the eyes and mouth are centered as well as possible. We will not do any
fancy 3D warps because that would introduce distortions into the image. We are only going
to use basic image transformations like rotation and scale that preserve parallel lines (called
affine transformations). Now, no matter how the face is turned, we are able to center the
eyes and mouth in roughly the same position in the image. This will make our next step
a lot more accurate.

Figure 3.7: Transforming the face to be as close to centered as possible
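In the project, dlib performs both landmark detection and alignment. To show the underlying idea, here is a hypothetical NumPy routine that estimates a similarity transform (rotation, uniform scale and translation, a restricted affine map) from matched landmark pairs by least squares:

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares rotation+scale+translation mapping src points onto dst points."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    s, d = src - mu_s, dst - mu_d
    # treat each 2-D point as a complex number x + iy; one complex ratio
    # then encodes both the rotation angle and the uniform scale
    sc = s[:, 0] + 1j * s[:, 1]
    dc = d[:, 0] + 1j * d[:, 1]
    a = np.vdot(sc, dc) / np.vdot(sc, sc)      # scale * e^(i*angle)
    A = np.array([[a.real, -a.imag], [a.imag, a.real]])
    t = mu_d - A @ mu_s
    return A, t                                # maps a point p to A @ p + t
```

Feeding it the detected eye and mouth landmarks as `src` and a fixed template of "ideal" positions as `dst` yields the warp that centers the face.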

3.1.1.3 Encoding Faces


The simplest approach to face recognition is to directly compare the unknown face we
found in Section 3.1.1.2 with all the pictures we have of people that have already been tagged.
When we find a previously tagged face that looks very similar to our unknown face, it must
be the same person. Seems like a pretty good idea, right? However, there is actually a huge
problem with that approach. A site like Facebook with billions of users and a trillion photos
cannot possibly loop through every previously tagged face to compare it to every newly
uploaded picture. That would take way too long. They need to be able to recognize faces
in milliseconds, not hours. What we need is a way to extract a few basic measurements
from each face. Then we could measure our unknown face the same way and find the
known face with the closest measurements. For example, we might measure the size of
each ear, the spacing between the eyes, the length of the nose, etc.

It turns out that the measurements that seem obvious to us humans (like eye color) do not
really make sense to a computer looking at individual pixels in an image. Researchers have
discovered that the most accurate approach is to let the computer figure out the
measurements to collect itself. Deep learning does a better job than humans at figuring out
which parts of a face are important to measure. The solution is to train a Deep
Convolutional Neural Network. We are going to train it to generate 128 measurements for
each face. The training process works by looking at 3 face images at a time:
1. Load a training face image of a known person
2. Load another picture of the same known person
3. Load a picture of a totally different person

Then the algorithm looks at the measurements it is currently generating for each of those
three images. It then tweaks the neural network slightly so that the measurements it
generates for #1 and #2 become slightly closer, while the measurements for #2 and #3
become slightly further apart:

Figure 3.8: A single triplet training step
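The training step in Figure 3.8 minimizes what the FaceNet paper calls a triplet loss. A NumPy sketch of the loss value alone (the network-weight update it drives is omitted, and the 0.2 margin is an illustrative choice):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Penalize triplets where the same-person pair is not closer,
    by at least a margin, than the different-person pair."""
    d_pos = np.sum((anchor - positive) ** 2)   # same person: pull together
    d_neg = np.sum((anchor - negative) ** 2)   # different person: push apart
    return float(max(d_pos - d_neg + margin, 0.0))
```

Training nudges the network's weights so that this value falls to zero for as many triplets as possible.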

After repeating this step millions of times for millions of images of thousands of different
people, the neural network learns to reliably generate 128 measurements for each person.
Any ten different pictures of the same person should give roughly the same measurements.
Machine learning people call the 128 measurements of each face an embedding. The idea
of reducing complicated raw data like a picture into a list of computer-generated numbers
comes up a lot in machine learning (especially in language translation). The exact approach
for faces we are using was invented in 2015 by researchers at Google but many similar
approaches exist.

This process of training a convolutional neural network to output face embeddings requires
a lot of data and computing power. Even with an expensive NVIDIA Tesla video card, it takes
about 24 hours of continuous training to get good accuracy. But once the network has been
trained, it can generate measurements for any face, even ones it has never seen before! So,
this step only needs to be done once. Lucky for us, the fine folks at OpenFace already did
this and they published several trained networks which we can directly use. Thanks
Brandon Amos and team!
So, all we need to do ourselves is run our face images through their pre-trained network to
get the 128 measurements for each face. Here are the measurements for our test image:

Figure 3.9: 128 measurements for face in given image

So, what parts of the face are these 128 numbers measuring exactly? It turns out that we
have no idea, and it does not really matter to us. All we care about is that the network
generates nearly the same numbers when looking at two different pictures of the same person.

3.1.1.4 Finding the person


This last step is actually the easiest in the whole process. All we have to do is find the
person in our database of known people whose measurements are closest to those of our
test image.

We can do that using any basic machine learning classification algorithm; no fancy
deep learning tricks are needed. We will use a simple linear SVM classifier, but lots of
classification algorithms could work. All we need to do is train a classifier that takes in
the measurements from a new test image and tells which known person is the closest
match. Running this classifier takes milliseconds.
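The project trains a linear SVM for this step, but as noted, many classifiers work. An even simpler nearest-embedding sketch conveys the idea; the 0.6 tolerance mirrors the face-recognition library's conventional default, and the names are our own:

```python
import numpy as np

def identify(known_encodings, names, probe, tolerance=0.6):
    """Return the name whose 128-d embedding is closest to the probe,
    or None if even the best match is too far away."""
    dists = np.linalg.norm(np.asarray(known_encodings) - probe, axis=1)
    best = int(np.argmin(dists))
    return names[best] if dists[best] <= tolerance else None
```

With enrolled embeddings for each account holder, one call per frame decides whether the face at the ATM matches the account's owner.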

CHAPTER-4
Design & Development Tools

In this project we mainly used software to develop the entire idea. However, we used a few
pieces of hardware for simulation. A few Python dependencies were also used in this project.

Table 4.1: List of software and hardware used


Name                      Version     Category
Python                    3.7         Software
PyCharm                   2020.2      Software
SQLite                    3           Software
opencv-python             3.4.6.27    Dependency
numpy                     1.19.1      Dependency
cmake                     3.18.2      Dependency
dlib                      19.18.0     Dependency
face-recognition          1.3.0       Dependency
twilio                    6.45.3      Dependency
tkinter                   8.6         Dependency
Raspberry Pi              4           Hardware
USB Generic Webcam        -           Hardware
Numeric Pad (USB)         -           Hardware
7" HD TFT Color Monitor   -           Hardware

4.1 SOFTWARE

4.1.1 Python [18]


Python is an interpreted, high-level and general-purpose programming language. Created
by Guido van Rossum and first released in 1991, Python's design philosophy emphasizes
code readability with its notable use of significant whitespace. Its language constructs and
object-oriented approach aim to help programmers write clear, logical code for small and
large-scale projects. Python is dynamically typed and garbage-collected. It supports
multiple programming paradigms, including structured (particularly, procedural), object-
oriented, and functional programming. Python is often described as a "batteries included"
language due to its comprehensive standard library. Python was created in the late 1980s
as a successor to the ABC language. Python 2.0, released in 2000, introduced features like
list comprehensions and a cycle-detecting garbage collection system.

Python 3.0, released in 2008, was a major revision of the language that is not completely
backward-compatible, and much Python 2 code does not run unmodified on Python 3. The
Python 2 language was officially discontinued in 2020 (first planned for 2015), and Python
2.7.18 is the last Python 2.7 release and therefore the last Python 2 release. No more
security patches or other improvements will be released for it. With Python 2's end-of-life,
only Python 3.6.x and later are supported.

Figure 4.1: Logo of Python

Python interpreters are available for many operating systems. A global community of
programmers develops and maintains CPython, a free and open-source reference
implementation. A non-profit organization, the Python Software Foundation, manages and
directs resources for Python and CPython development.

4.1.2 PyCharm [19]


PyCharm is an integrated development environment (IDE) used in computer programming,
specifically for the Python language. It is developed by the Czech company JetBrains. It
provides code analysis, a graphical debugger, an integrated unit tester, integration with
version control systems (VCSes), and supports web development with Django as well as
data science with Anaconda.

PyCharm is cross-platform, with Windows, macOS and Linux versions. The Community
Edition is released under the Apache License, and there is also Professional Edition with
extra features – released under a proprietary license.

Some common features of PyCharm are:


• Coding assistance and analysis, with code completion, syntax and error
highlighting, linter integration, and quick fixes
• Project and code navigation: specialized project views, file structure views and
quick jumping between files, classes, methods and usages
• Python refactoring: includes rename, extract method, introduce variable, introduce
constant, pull up, push down and others
• Support for web frameworks: Django, web2py and Flask [professional edition
only]
• Integrated Python debugger
• Integrated unit testing, with line-by-line code coverage
• Google App Engine Python development [professional edition only]
• Version control integration: unified user interface for Mercurial, Git, Subversion,
Perforce and CVS with change lists and merge
• Support for scientific tools like matplotlib, numpy and scipy [professional edition
only]
PyCharm competes mainly with a number of other Python-oriented IDEs, including
Eclipse's PyDev and the more broadly focused Komodo IDE.

Figure 4.2: Logo of PyCharm

4.1.3 SQLite [20]


SQLite is a relational database management system (RDBMS) contained in a C library. In
contrast to many other database management systems, SQLite is not a client–server
database engine. Rather, it is embedded into the end program.

SQLite is ACID-compliant and implements most of the SQL standard, generally following
PostgreSQL syntax. However, SQLite uses a dynamically and weakly typed SQL syntax
that does not guarantee domain integrity. This means that one can, for example, insert
a string into a column defined as an integer. SQLite will attempt to convert data between
formats where appropriate, the string "123" into an integer in this case, but does not
guarantee such conversions and will store the data as-is if such a conversion is not possible.
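This flexible typing is easy to see with Python's built-in sqlite3 module (the table and values below are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")            # throwaway in-memory database
con.execute("CREATE TABLE accounts (acc_no INTEGER, pin TEXT)")
# the string '123' is coerced to an integer because the column
# has INTEGER affinity; the TEXT column keeps its value as-is
con.execute("INSERT INTO accounts VALUES ('123', '4567')")
row = con.execute("SELECT acc_no, pin FROM accounts").fetchone()
print(row)  # prints (123, '4567')
```

Storing the PIN column as TEXT is deliberate: a PIN such as "0042" would lose its leading zeros in an integer column.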

SQLite is a popular choice as embedded database software for local/client storage in
application software such as web browsers. It is arguably the most widely deployed
database engine, as it is used today by several widespread browsers, operating systems,
and embedded systems (such as mobile phones), among others. SQLite has bindings to
many programming languages.

Some common features of SQLite are:


• SQLite uses an unusual type system for a SQL-compatible DBMS
• Tables normally include a hidden rowid index column, which gives faster access
• Full Unicode support in SQLite is optional.
• Several computer processes or threads may access the same database concurrently.
• SQLite version 3.7.4 first saw the addition of the FTS4 (full-text search) module,
which features enhancements over the older FTS3 module
• In 2015, with the json1 extension and new subtype interfaces, SQLite version 3.9
introduced JSON content managing.
• As of version 3.33.0, the maximum database size is 281 TB.

Figure 4.3: Logo of SQLite

4.2 DEPENDENCIES

4.2.1 OpenCV [21]


OpenCV is a huge open-source library for computer vision, machine learning, and image
processing. OpenCV supports a wide variety of programming languages like Python, C++,
Java, etc. It can process images and videos to identify objects, faces, or even the
handwriting of a human. When it is integrated with other libraries, such as NumPy, a
highly optimized library for numerical operations, whatever operations one can do in
NumPy can be combined with OpenCV.


OpenCV's application areas include:


• 2D and 3D feature toolkits
• Egomotion estimation
• Facial recognition system
• Gesture recognition
• Human–computer interaction (HCI)
• Mobile robotics
• Motion understanding
• Object identification
• Segmentation and recognition
• Stereopsis stereo vision: depth perception from 2 cameras
• Structure from motion (SFM)
• Motion tracking
• Augmented reality
To support some of the above areas, OpenCV includes a statistical machine learning
library that contains:
• Boosting
• Decision tree learning
• Gradient boosting trees
• Expectation-maximization algorithm
• k-nearest neighbor algorithm
• Naive Bayes classifier
• Artificial neural networks
• Random forest
• Support vector machine (SVM)
• Deep neural networks (DNN)

Figure 4.4: Logo of OpenCV


4.2.2 NumPy [22]
NumPy is a library for the Python programming language, adding support for large, multi-
dimensional arrays and matrices, along with a large collection of high-level mathematical
functions to operate on these arrays. The ancestor of NumPy, Numeric, was originally
created by Jim Hugunin with contributions from several other developers. In 2005, Travis
Oliphant created NumPy by incorporating features of the competing Numarray into
Numeric, with extensive modifications. NumPy is open-source software.

Some common features of NumPy are:


• NumPy targets the CPython reference implementation of Python, which is a non-
optimizing bytecode interpreter. Mathematical algorithms written for this version
of Python often run much slower than compiled equivalents. NumPy addresses the
slowness problem partly by providing multidimensional arrays and functions and
operators that operate efficiently on arrays, requiring rewriting some code, mostly
inner loops, using NumPy.
• Using NumPy in Python gives functionality comparable to MATLAB since they
are both interpreted, and they both allow the user to write fast programs as long as
most operations work on arrays or matrices instead of scalars. In comparison,
MATLAB boasts a large number of additional toolboxes, notably Simulink,
whereas NumPy is intrinsically integrated with Python, a more modern and
complete programming language. Moreover, complementary Python packages are
available; SciPy is a library that adds more MATLAB-like functionality and
Matplotlib is a plotting package that provides MATLAB-like plotting functionality.
Internally, both MATLAB and NumPy rely on BLAS and LAPACK for efficient
linear algebra computations.
• Python bindings of the widely used computer vision library OpenCV utilize NumPy
arrays to store and operate on data. Since images with multiple channels are simply
represented as three-dimensional arrays, indexing, slicing or masking with other
arrays are very efficient ways to access specific pixels of an image. The use of the
NumPy array as the universal data structure in OpenCV for images, extracted feature
points, filter kernels and much more vastly simplifies the programming workflow and
debugging.
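Since OpenCV represents an image as a NumPy array, indexing and slicing work directly on pixels. A tiny self-contained sketch, using a synthetic array in place of a decoded image:

```python
import numpy as np

img = np.zeros((4, 6, 3), dtype=np.uint8)   # a tiny 4x6 "image" with 3 channels
img[:, :, 2] = 255                          # flood one colour channel
pixel = img[1, 2]                           # one pixel: its 3 channel values
patch = img[0:2, 0:3]                       # a 2x3 sub-image via slicing
mask = img[:, :, 2] > 128                   # boolean mask over all pixels
```

The same slicing idiom is how a face region returned by a detector would be cropped out of a camera frame.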

Figure 4.5: Logo of NumPy

Figure 4.6: Methods in NumPy
4.2.3 CMake [23]
CMake is a cross-platform free and open-source software tool for managing the build
process of software using a compiler-independent method. It supports directory hierarchies
and applications that depend on multiple libraries. It is used in conjunction with native
build environments such as Make, Qt Creator, Ninja, Apple's Xcode, and Microsoft Visual
Studio. It has minimal dependencies, requiring only a C++ compiler on its own build
system.

Some common features of CMake are:


• CMake can handle in-place and out-of-place builds, enabling several builds from
the same source tree, and cross-compilation. The ability to build a directory tree
outside the source tree is a key feature, ensuring that if a build directory is removed,
the source files remain unaffected.
• CMake can locate executables, files, and libraries. These locations are stored in a
cache, which can then be tailored before generating the target build files. The cache
can be edited with a graphical editor, which is included in the project.
• Complicated directory hierarchies and applications that rely on several libraries are
well supported by CMake. For instance, CMake is able to accommodate a project
that has multiple toolkits, or libraries that each have multiple directories. In
addition, CMake can work with projects that require executables to be created
before generating code to be compiled for the final application. Its open-source,
extensible design allows CMake to be adapted as necessary for specific projects.
• CMake can generate project files for several prominent IDEs, such as Microsoft
Visual Studio, Xcode, and Eclipse CDT. It can also produce build scripts for
MSBuild or NMake on Windows; Unix Make on Unix-like platforms such as
Linux, macOS, and Cygwin; and Ninja on both Windows and Unix-like platforms.

Figure 4.7: Logo of CMake

4.2.4 Dlib [24]
Dlib is a general-purpose cross-platform software library written in the programming
language C++. Its design is heavily influenced by ideas from design by contract and
component-based software engineering. Thus, it is, first and foremost, a set of independent
software components. It is open-source software released under a Boost Software License.

Since development began in 2002, Dlib has grown to include a wide variety of tools. As of
2016, it contains software components for dealing with networking, threads, graphical user
interfaces, data structures, linear algebra, machine learning, image processing, data mining,
XML and text parsing, numerical optimization, Bayesian networks, and many other tasks.
In recent years, much of the development has been focused on creating a broad set of
statistical machine learning tools and in 2009 Dlib was published in the Journal of Machine
Learning Research. Since then it has been used in a wide range of domains.

Figure 4.8: Logo of Dlib

4.2.5 Face-Recognition [25]


The face-recognition package lets us recognize and manipulate faces from Python or from
the command line. It is built on dlib's state-of-the-art face recognition, which uses deep
learning, and the model has an accuracy of 99.38% on the Labeled Faces in the Wild
benchmark. It also provides a simple face_recognition command-line tool for running face
recognition on a folder of images.

4.2.6 Twilio [26]


Twilio is an American cloud communications platform as a service (CPaaS) company
based in San Francisco, California. Twilio allows software developers to programmatically
make and receive phone calls, send and receive text messages, and perform other
communication functions using its web service APIs.

Figure 4.9: Logo of Twilio

4.2.7 Tkinter [27]
Tkinter is a Python binding to the Tk GUI toolkit. It is the standard Python interface to the
Tk GUI toolkit, and is Python's de facto standard GUI. Tkinter is included with standard
Linux, Microsoft Windows and Mac OS X installs of Python. The name Tkinter comes
from Tk interface. Tkinter was written by Fredrik Lundh. Tkinter is free software released
under a Python license.

Creating a GUI application using Tkinter is an easy task. All we need to do is perform the
following steps:
• Import the Tkinter module.
• Create the GUI application main window.
• Add one or more widgets (labels, buttons, entry fields, etc.) to the GUI application.
• Enter the main event loop to take action against each event triggered by the user.

4.3 HARDWARE

4.3.1 Raspberry Pi [28]


The Raspberry Pi is a series of small single-board computers developed in the United
Kingdom by the Raspberry Pi Foundation. Early on, the Raspberry Pi project leaned
towards the promotion of teaching basic computer science in schools and in developing
countries. Later, the original model became far more popular than anticipated, selling
outside its target market for uses such as robotics. It is now widely used in many areas,
such as for weather monitoring, because of its low cost and high portability. It does not
include peripherals (such as keyboards and mice) or cases. However, some accessories
have been included in several official and unofficial bundles.

After the release of the second board type, the Raspberry Pi Foundation set up a new entity,
named Raspberry Pi Trading, and installed Eben Upton as CEO, with the responsibility of
developing technology. The Foundation was rededicated as an educational charity for
promoting the teaching of basic computer science in schools and developing countries.

The Raspberry Pi is one of the best-selling British computers. As of December 2019, more
than thirty million boards have been sold. Most Pis are made in a Sony factory in Pencoed,
Wales, while others are made in China and Japan.

Figure 4.10: Raspberry Pi 4 B

Specification of Raspberry Pi 4 used in this project:


• Broadcom BCM2711, Quad-core Cortex-A72 (ARM v8) 64-bit SoC @ 1.5GHz
• 2GB LPDDR4-3200 SDRAM
• 2.4 GHz and 5.0 GHz IEEE 802.11ac wireless, Bluetooth 5.0, BLE
• Gigabit Ethernet
• 2 USB 3.0 ports; 2 USB 2.0 ports.
• Raspberry Pi standard 40 pin GPIO header
• 2 × micro-HDMI ports (up to 4Kp60 supported)
• 2-lane MIPI DSI display port
• 2-lane MIPI CSI camera port
• 4-pole stereo audio and composite video port
• H.265 (4Kp60 decode), H.264 (1080p60 decode, 1080p30 encode)
• OpenGL ES 3.0 graphics
• 64 GB microSD storage
• 5V DC via USB-C connector (minimum 3A*)
• 5V DC via GPIO header (minimum 3A*)
• Power over Ethernet (PoE) enabled (requires separate PoE HAT)
• Operating temperature: 0 – 50 degrees C ambient
• Operating System: Raspberry Pi OS (previously called Raspbian)

CHAPTER-5
Project Overview

5.1 USER INTERFACE


In this chapter we will go through all the user interfaces of our project. As we are not
proposing a completely different system implementation, we did not change the user
interfaces much. Existing ATMs already have various interfaces depending on the bank
providing them.

5.1.1 Home Screen


The home screen is the interface we see at the very beginning of any transaction. Typically,
we insert our ATM card at this stage to start the transaction process, followed by providing
the PIN. However, for simplicity, we ask the user to provide the account number and PIN directly.

Figure 5.1: Home screen interface

5.1.2 Processing Input
Once the account number and PIN have been given, the user will see the interface shown
in Figure 5.2, where the system checks the given details against the database of the
respective bank. At this point, the camera of the ATM also collects video footage for
facial recognition.

Figure 5.2: Processing input interface

5.1.3 PIN Error


If the provided PIN does not match the database, an error message is displayed as in
Figure 5.3. The user can start the process again by pressing ‘Yes’ or terminate it by
pressing ‘No’.

Figure 5.3: PIN error interface

5.1.4 No Face Found


If the provided PIN matches the database, the facial recognition process starts. The system
first tries to detect a human face in the video footage collected by the camera of the ATM.
If no human face is present in the footage, an error message is displayed as in Figure 5.4,
and the user can start the process again by pressing ‘Yes’ or terminate it by pressing ‘No’.

Figure 5.4: Face not found error interface

5.1.5 Face Recognition Failed


If the system can detect a human face in the video footage collected by the camera of the
ATM, it will extract all frames from the footage and compare them to the account holder’s
image taken from the database. If facial recognition fails, an error message is displayed as
in Figure 5.5 and an OTP is sent to the registered mobile number of the account holder.
The user then needs to provide the OTP on this screen.

Figure 5.5: Facial recognition failed interface

5.1.6 OTP Processing


Upon entering the OTP received on the phone, the system validates whether the OTP is
correct. Meanwhile, a processing screen is displayed as in Figure 5.6.

Figure 5.6: OTP verification interface


5.1.7 Wrong OTP
If the provided OTP does not match, an error message is displayed as in Figure 5.7. The
user can start the process again by pressing ‘Yes’ or terminate it by pressing ‘No’.

Figure 5.7: Wrong OTP interface

5.1.8 Transaction Menu


The transaction menu is where a user can perform all kinds of operations, such as money
withdrawal, balance checking, fund transfer, etc. In our system, a user can reach this
interface through either of these two processes:
• Correct PIN + Positive Identification of Face
• Correct PIN + Negative Identification of Face + Correct OTP
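The two access paths above reduce to a small decision rule, which might be sketched as follows (the function and argument names are our own, not from the project code):

```python
def grant_access(pin_ok, face_ok, otp_ok=False):
    """Correct PIN plus either a face match, or a correct OTP after face failure."""
    if not pin_ok:
        return False          # a wrong PIN ends the session immediately
    if face_ok:
        return True           # path 1: PIN + positive face identification
    return otp_ok             # path 2: PIN + failed face + correct OTP
```

Note the asymmetry: the OTP is only consulted when the face check fails, so a successful face match never prompts the account holder for a code.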

Figure 5.8: Transaction menu interface

5.2 OTP GENERATION


We used a trial account of Twilio to demonstrate the OTP process. A bank can choose any
existing process to generate the OTP. Moreover, a bank may use an automated voice call
to collect the PIN for this stage of verification.
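Twilio only delivers the message; the OTP itself can be generated and checked with the Python standard library. A sketch under our own naming, with an arbitrary two-minute validity window:

```python
import secrets
import time

def new_otp(digits=6, ttl_seconds=120):
    """Return a random numeric OTP and the time at which it expires."""
    code = "".join(secrets.choice("0123456789") for _ in range(digits))
    return code, time.time() + ttl_seconds

def check_otp(entered, code, expires_at):
    """Accept only an exact, unexpired match; compare_digest resists timing attacks."""
    return time.time() < expires_at and secrets.compare_digest(entered, code)
```

The `secrets` module is preferred over `random` here because OTPs are security-sensitive and need a cryptographically strong source of randomness.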

Figure 5.9: OTP message from Twilio

5.3 PROJECT SIMULATION
For simulation purposes, we used a Raspberry Pi 4 minicomputer-based system as an ATM.

Figure 5.10: Demo ATM used in this project

CHAPTER-6
Conclusion

6.1 CONCLUSION
According to the literature reviewed, which draws on secondary data sources and a few
primary data sources, there is a potential threat posed to ATM users, whether through
robbery or lost cards. The purpose of this project was to review the literature on ATM
security systems and to propose one that is more secure than the existing system.

Facial recognition has proven to be the most secure of all biometric methods, to the point
that it is widely used in the United States for high-level security, where the system is even
entrusted to help in the fight against terrorism. Its use at this level shows how much the
technology has advanced as a method of identification and verification. New, improved
techniques such as artificial intelligence, which help eliminate disturbances and distortions
that could affect the system's effectiveness, can raise the accuracy rate from a simple
60-75% to 80-100%. These techniques make the system very difficult to penetrate.
Biometric authentication with smart cards is a stronger method of authentication and
verification, as it is uniquely bound to individuals. It is a viable approach, as it is easy to
maintain and operate at low cost. In this project, a new authentication technique for the
ATM system is introduced for secure transactions using ATMs.

From the above explanation, having both ways of logging in at the ATM is safer than
having only one way of accessing transactions; that is, requiring both PIN entry and facial
recognition credentials creates more security, as one has to pass both barriers before
gaining access to transactions.

6.2 LIMITATIONS
There are some limitations of the current system for which solutions can be provided as
future development:
• It fails to distinguish identical twins.
• It fails to protect the account if the registered phone is stolen.

6.3 FUTURE DEVELOPMENT


Image processing and machine learning are such huge topics that we could keep updating
and adding features given more time. The continuous development of these fields also
requires us to upgrade the project from time to time. However, due to lack of time, we
were not able to add a few features we had already thought of. Along with removing the
limitations mentioned above, we plan to add more features to this project, including but
not limited to:
• Make E-ATM a complete package
• Introduce blockchain and Software-Defined Networking (SDN)
• Integrate IoT into the complete package
• Construct and train a customized deep CNN
• Improve the system's ability to distinguish a real human face from a digital or printed image
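One common way to tackle the last item, telling a live face apart from a printed or on-screen photo, is blink detection using the eye aspect ratio (EAR) computed from facial landmarks (e.g. the six eye points of dlib's 68-point model [24]). The sketch below assumes the per-frame eye landmarks are already available; the threshold and frame-count values are illustrative, not tuned.

```python
from math import dist

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmark points around one eye, as produced by a
    facial-landmark detector such as dlib's 68-point model.
    The EAR drops sharply when the eye closes, so a dip below a
    threshold across consecutive frames indicates a blink."""
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = 2.0 * dist(eye[0], eye[3])
    return vertical / horizontal

def blinked(ear_sequence, threshold=0.2, min_frames=2):
    """True if the EAR stayed below threshold for min_frames in a row."""
    run = 0
    for ear in ear_sequence:
        run = run + 1 if ear < threshold else 0
        if run >= min_frames:
            return True
    return False
```

A static photo yields a near-constant EAR over time, while a live user produces the characteristic dip of a blink across a few consecutive frames, so the transaction can be refused when no blink is observed.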

REFERENCES
[1] https://thefinancialexpress.com.bd/trade/number-of-atm-booth-reaches-10924-1582012185
(last accessed on 10-09-2020)

[2] A. Saha and Md. M. Rahman, "Automated Teller Machine Card Fraud of Financial Organizations in Bangladesh," Journal of Computer Science Applications and Information Technology, vol. 3, 2018. doi:10.15226/2474-9257/3/1/00126

[3] https://www.dhakatribune.com/bangladesh/crime/2018/04/25/bank-card-cloning-mastermind-
held-1500-fake-cards (last accessed on 10-09-2020)

[4] Babaei, Hossein & Molalapata, Ofentse & Pandor, Abdul-Hay. (2012). Face Recognition
Application for Automatic Teller Machines.

[5] Malviya D. Face recognition technique: Enhanced safety approach for ATM. International
Journal of Scientific and Research Publications. 2014 Dec;4(12):1-6.

[6] S. Oko and J. Oruh, "Enhanced ATM Security System Using Biometrics," IJCSI International Journal of Computer Science Issues, vol. 9, issue 5, no. 3, September 2012.

[7] S. T. Bhosale and B. S. Sawant, "Security in E-Banking via Cardless Biometric ATMs," International Journal of Advanced Technology & Engineering Research, vol. 2, issue 4, July 2012.

[8] B. S. Raj, "A Third Generation Automated Teller Machine Using Universal Subscriber Module
with Iris Recognition," image, vol. 1, 2013.

[9] K. J. Peter, G. Nagarajan, G. G. S. Glory, V. V. S. Devi, S. Arguman, and K. S. Kannan, "Improving ATM security via face recognition," in Electronics Computer Technology (ICECT), 2011 3rd International Conference on, 2011, pp. 373-376.

[10] S. Eum, J. K. Suhr, and J. Kim, "Face recognizability evaluation for atm applications with
exceptional occlusion handling," in Computer Vision and Pattern Recognition Workshops
(CVPRW), 2011 IEEE Computer Society Conference on, 2011, pp. 82-89.

[11] N. Sharma, "Analysis of different vulnerabilities in auto teller machine transactions," Journal
of Global Research in Computer Science, vol. 3, pp. 38-40, 2012.

[12] G. N. Odachi, "ATM Technology and Banking System in West African Sub-Region: Prospects
and Challenges," African Research Review, vol. 5, 2011.

[13] J. Breebaart, I. Buhan, K. de Groot, and E. Kelkboom, "Evaluation of a template protection
approach to integrate fingerprint biometrics in a PIN-based payment infrastructure," Electronic
Commerce Research and Applications, vol. 10, pp. 605-614, 2011.

[14] S. Thorat, S. Nayak, and J. P. Dandale, "Facial recognition technology: An analysis with scope
in India," arXiv preprint arXiv:1005.4263, 2010.

[15] S. Das and J. Debbarma, "Designing a Biometric Strategy (Fingerprint) Measure for
Enhancing ATM Security in Indian e-banking System," International Journal of Information and
Communication, 2011.

[16] J. O. Adeoti, "Automated Teller Machine (ATM) Frauds in Nigeria: The Way Out," Journal
of Social Sciences, vol. 27, pp. 53-58, 2011.

[17] Kibona, Lusekelo. “Face Recognition as a Biometric Security for Secondary Password for
ATM Users. A Comprehensive Review.” International Journal of Scientific Research in Science
and Technology 1 (2015): 1-8.

[18] https://en.wikipedia.org/wiki/Python_(programming_language) (last accessed on 15-09-2020)

[19] https://en.wikipedia.org/wiki/PyCharm (last accessed on 15-09-2020)

[20] https://en.wikipedia.org/wiki/SQLite (last accessed on 15-09-2020)

[21] https://en.wikipedia.org/wiki/OpenCV (last accessed on 15-09-2020)

[22] https://en.wikipedia.org/wiki/NumPy (last accessed on 15-09-2020)

[23] https://en.wikipedia.org/wiki/CMake (last accessed on 15-09-2020)

[24] https://en.wikipedia.org/wiki/Dlib (last accessed on 15-09-2020)

[25] https://pypi.org/project/face-recognition/ (last accessed on 15-09-2020)

[26] https://www.twilio.com/docs/sms (last accessed on 15-09-2020)

[27] https://wiki.python.org/moin/TkInter (last accessed on 15-09-2020)

[28] https://en.wikipedia.org/wiki/Raspberry_Pi (last accessed on 15-09-2020)
