
IoT BASED ATTENDANCE MONITORING SYSTEM

PROJECT REPORT

Submitted by

A. BALA GANESH 19DC03


S. NISHANTH 19DC16
N. VISHNU PRASAD 19DC26

Under the guidance of


Ms. K. Sudha

In partial fulfilment of the requirements for the award of


DIPLOMA IN COMPUTER NETWORKING
STATE BOARD OF TECHNICAL EDUCATION
GOVERNMENT OF TAMILNADU

June 2021-2022
DEPARTMENT OF COMPUTER NETWORKING
PSG POLYTECHNIC COLLEGE
(Autonomous and an ISO 9001 certified Institution)
COIMBATORE -641 004
PSG POLYTECHNIC COLLEGE
(Autonomous and an ISO 9001 certified Institution)

DEPARTMENT OF COMPUTER NETWORKING


COIMBATORE-641004
CERTIFICATE

This is to certify that the Project Report entitled

IoT BASED ATTENDANCE MONITORING SYSTEM

has been submitted by

A. BALA GANESH 19DC03
S. NISHANTH 19DC16
N. VISHNU PRASAD 19DC26

In partial fulfillment for the award of

DIPLOMA IN COMPUTER NETWORKING


of the State Board of Technical Education,
Government of Tamil Nadu,
during the academic year 2021-2022.
Ms. K. Sudha                                        Dr. S. Brindha
Faculty Guide                                       Head of the Department

Certified that the candidate was examined by us in the project viva-voce examination held
on……………….

(Internal Examiner) (External Examiner)



SYNOPSIS
In the existing fingerprint-based attendance system, a portable fingerprint device needs to be configured with the students' fingerprints in advance. In the current era, most human tasks are performed using various technologies. The Internet of Things (IoT) is a vast domain in which multiple sensor-based devices interact with each other to minimize human effort and complexity. In an IoT-based attendance monitoring system, a portable RFID device needs to be configured with each student's RFID tag in advance. Later, either during or before the lecture hours, the student records their face (with mask) on the configured device to register their attendance for the day. The problem with this approach is that it may distract the students' attention during lecture time.

Biometrics seem secure on the surface; after all, you are the only one with your face. But that does not necessarily make biometrics more secure than passwords. A password is inherently private because you are the only one who knows it. Of course, hackers can acquire it by brute-force attacks or phishing, but generally other people cannot access it. Biometrics, on the other hand, are inherently public: your ears, eyes, and face are exposed. You reveal your eyes whenever you look at things, with face recognition you leave your face everywhere you go, and with voice recognition someone can record your voice. Essentially, there is easy access to all of these identifiers.

This project is designed on an IoT-based platform using the Raspberry Pi camera, which is connected to the Raspberry Pi camera slot. A live video stream of students in the class is captured with a USB camera; the Raspberry Pi takes those frames as input images and sends them to the cloud server, where a face recognition service compares the input images with the existing images already uploaded in the database. Matched images are detected and attendance is marked with the date and time for students present in class in the web application. Unmatched images are denied. This process is carried out every period and students are given attendance accordingly.

A unique RFID card is given to the faculty. When the faculty member enters the classroom and swipes the RFID card, the RFID sensor scans it, the camera column rises and takes a picture of their face, and the system then allows 10 seconds for wearing a mask before sending the data to the database and displaying it on the LCD. The web application is designed for tracking attendance. Student attendance is monitored, and if a student who is not recognized by the system scans on the RFID reader, the display shows an illegal-access screen and a notification is sent to the administration.


ACKNOWLEDGEMENT
We would like to express our gratitude to Dr. B. GIRIRAJ, Principal, PSG Polytechnic College, for motivating us to take up this project.
We are really grateful to Dr. S. BRINDHA, Head of the Department of Computer Networking, for her valuable support and encouragement.
We take this opportunity to express our heartfelt gratitude to our project guide Ms. K. SUDHA, Department of Computer Networking, for her guidance, motivation, encouragement and help in completing this project successfully.
We also render our sincere thanks to the teaching and non-teaching faculty of the Computer Networking department for their encouragement to complete this project.


CONTENTS
CONTENTS Page No.
ACKNOWLEDGEMENT…………………………….….…………………………… (i)
SYNOPSIS……………………………………………………………………………. (ii)
TABLE OF CONTENTS.…………………………………………...……………….. (iii- iv)
LIST OF FIGURES…………………………………………………………………… (v)
LIST OF ABBREVIATION…………………………………………………………… (vi)
CHAPTER 1 …………………………………………………………………………... 1-4
1. INTRODUCTION
1.1. OVERVIEW OF THE PROJECT 2
1.2. EXISTING SYSTEM 2
1.3. PROBLEMS IN EXISTING SYSTEM 2
1.4. OBJECTIVES OF THE PROJECT 3
1.5. ADVANTAGES 3
1.6. HARDWARE AND SOFTWARE REQUIREMENTS 4
1.7. OVERVIEW OF THE REPORT 4
CHAPTER 2………………………………………………………………………….. 5-8
2. IOT BASED ATTENDANCE MONITORING SYSTEM
2.1. INTRODUCTION 5
2.2. RECOGNITION SYSTEM 5
2.3. LITERATURE SURVEY 6
2.4. PROPOSED SYSTEM 7
2.5. WORKING PRINCIPLE 7
CHAPTER 3……………………………………………………………………………. 9-19
3. CLIENT-SIDE DESIGN AND DEVELOPMENT
3.1. RASPBERRY PI 9
3.2. HARDWARE 9
3.3. PROCESSOR 10
3.4. OVER CLOCKING 10
3.5. RAM 11
3.6. NETWORKING 12
3.7. PERIPHERALS 12
3.8. GPIO CONNECTOR 13
3.9. ACCESSORIES 13
3.10. OPERATING SYSTEMS 14
3.11. OTHER OPERATING SYSTEMS 15

3.12. PLANNED OPERATING SYSTEMS 16
3.13. RECEPTION AND USE 17
3.14. COMMUNITY 18
3.15. USE IN EDUCATION 18
CHAPTER 4………………………………………………………………………..… 20-28
4. SERVER-SIDE DESIGN AND DEVELOPMENT
4.1. ALGORITHM AND METHODS 20
4.2. HISTORY EXTRACTION 23
4.3. OPEN CV 24
4.4. APPLICATIONS 25
4.5. PROGRAMMING LANGUAGE 26
4.6. HARDWARE ACCELERATION 26
4.7. OS SUPPORT 26
4.8. COMPARISON BETWEEN OPENCV AND MATLAB 27
CHAPTER 5………………………………………………………………………….. 29-33
5. RESULT
5.1. CAMERA & RFID READER 29
5.2. CLOUD – THINGSPEAK & FIREBASE 31
5.3. CUSTOM WEB PAGE 32
CONCLUSION………………..………………………………………….……….….. (vii)
BIBLIOGRAPHY..…………………………………………………………….……… (viii)


LIST OF FIGURES
Fig. No Name Page No
2.1 Block Diagram 7
2.2 Flow Diagram 8
3.1 Raspberry Pi diagram 9
3.2 Location of connectors and ICs on original 11
Raspberry Pi Model B
3.3 Location of connectors and ICs on Raspberry Pi 12
B+ rev 1.2, and Raspberry Pi 2 Model B
3.4 Scheme of the implemented APIs: OpenMAX, 16
OpenGL ES and OpenVG
4.1 Types of Haar Features 20
4.2 Haar features applied on an image 21
4.3 Formula 1 21
4.4 LBP algorithm 22
4.5 Formula 2 22
4.6 Formula Flow Chart 22
4.7 Histogram extraction and concatenation 23
4.8 Formula 3 23
4.9 Blob Detection 25
5.1 Raspberry Pi with Sensors 29
5.2 Welcome Display 29
5.3 Placing the RFID Card 30
5.4 RFID Card is matched 30
5.5 Mask Verification 30
5.6 Attendance Marked 30
5.7 Illegal card Access 30
5.8 SMS Notification to Admin 30
5.9 RFID card id graph 31
5.10 Face id graph 31
5.11 Firebase database 31
5.12 Login Page 32
5.13 Dash Board Screen 32
5.14 Report Page 33

LIST OF ABBREVIATION

Abbreviation Expansion

MAS Manual Attendance System

AAS Automated Attendance System

RFID Radio Frequency Identification

CCTV Closed-Circuit Television

LCD Liquid Crystal Display

GUI Graphical User Interface

GSM Global System for Mobile communication

GPS Global Positioning System

USB Universal Serial Bus

CV Computer Vision

CPU Central Processing Unit

RAM Random Access Memory

GPIO General-purpose input/output

HAT Hardware Attached on Top

EMP Electromagnetic Pulse

GCSE General Certificate of Secondary Education

LBP Local Binary Pattern


CHAPTER 1
INTRODUCTION
Technology today aims to impart tremendous knowledge-oriented technical innovations. Deep Learning is one of the interesting domains that enables a machine to train itself by taking datasets as input and to provide an appropriate output during testing by applying suitable methods. Nowadays attendance is considered an important factor for both the student and the teacher of an educational organization. With the advancement of technology, the machine automatically detects the attendance of the students and maintains a record of the collected data.
In general, the attendance system of the student can be maintained in two different forms, namely:
• Manual Attendance System (MAS)
• Automated Attendance System (AAS)
The Manual Attendance System (MAS) is a process in which the teacher concerned with a particular subject needs to call out the students' names and mark attendance manually. Manual attendance is a time-consuming process, and sometimes the teacher may miss someone, or students may answer multiple times on behalf of absent friends. So the problem arises when we consider the traditional process of taking attendance in the classroom. To solve all these issues we go with the Automated Attendance System (AAS).
The Automated Attendance System (AAS) is a process that automatically estimates the presence or absence of each student in the classroom using face recognition technology. It is also possible to recognize whether a student is sleeping or awake during the lecture, and the system can also be used in exam sessions to ensure the presence of the student. The presence of the students can be determined by capturing their faces in a high-definition video stream, so it becomes highly reliable for the machine to confirm the presence of all the students in the classroom.
The two common human face recognition techniques are:
• Feature-based approach
• Brightness-based approach
The feature-based approach, also known as the local face recognition system, is used to point out the key features of the face such as the eyes, ears, nose, mouth and edges, whereas the brightness-based approach, also termed the global face recognition system, is used to recognize all parts of the image.

1.1 OVERVIEW OF THE PROJECT
This project is designed on an IoT-based platform using the Raspberry Pi camera, which is connected to the Raspberry Pi camera slot. A live video stream of students in the class is captured with a USB camera; the Raspberry Pi takes those frames as input images and sends them to the cloud server, where a face recognition service compares the input images with the existing images already uploaded in the database. Matched images are detected and attendance is marked with the date and time for students present in class in the web application. Unmatched images are denied. This process is carried out every period and students are given attendance accordingly.
A unique RFID card is given to the faculty. When the faculty member enters the classroom and swipes the RFID card, the RFID sensor scans it, the camera column rises and takes a picture of their face, and the system then allows 10 seconds for wearing a mask before sending the data to the database and displaying it on the LCD. The web application is designed for tracking attendance. Student attendance is monitored, and if a student who is not recognized by the system scans on the RFID reader, the display shows an illegal-access screen and a notification is sent to the administration.
1.2 EXISTING SYSTEM
In the existing system, attendance marking involves manual attendance on a paper sheet by professors and teachers, but this is a very time-consuming process, and the chance of proxy attendance is also an issue that arises in this type of attendance marking. There are also attendance marking systems such as RFID (Radio Frequency Identification) and biometrics, but these systems are currently not that popular in schools and classrooms.
1.3 PROBLEMS IN EXISTING SYSTEM
Manual systems put pressure on people to be correct in all details of their work at all times; the problem is that people aren't perfect, however much each of us wishes we were.
• These attendance systems are manual.
• There is always a chance of forgery (one person signing the presence of another); since these systems are manual, there is a great risk of error.
• More manpower is required.
• Calculations related to attendance (such as total classes attended in a month) are done manually, which is prone to error.
• It is difficult to maintain a database or register in manual systems.
• It is difficult to search for particular data in this system (especially if the data we are asking for is from very long ago).

• The ability to compute the attendance percentage becomes a major task, as manual computation produces errors and also wastes a lot of time.
• This method could easily allow impersonation, and the attendance sheet could be stolen or lost.
1.4 OBJECTIVES OF THE PROJECT
The primary goal is to help lecturers improve and organize the process of tracking and managing student attendance and absenteeism. Additionally, the system seeks to:
• Provide a valuable attendance service for both teachers and students.
• Reduce manual process errors by providing an automated and reliable attendance system that uses face recognition technology.
• Increase privacy and security so that a student cannot mark himself or a friend present when they are not.
• Produce monthly reports for lecturers.
• Provide flexibility: lecturers are able to edit attendance records.
• Calculate absenteeism percentages and send reminder messages to students.
1.5 ADVANTAGES
The main motivation for this project was the slow and inefficient traditional manual attendance system, so why not make it automated, fast and much more efficient? Similar face detection techniques are already in use by criminal investigation departments, where CCTV footage is used to detect faces from a crime scene and compare them with a criminal database to recognize suspects.
Face recognition is also becoming a feature of daily life, with authorities using it on the streets, in subway stations, and at airports.


1.6 HARDWARE AND SOFTWARE REQUIREMENTS


Software Used:
• Operating System : Windows 7 / 8 / 10
• Language : Python
• IDE : Anaconda, Notebook
Hardware Used:
• Processor : Intel Core i3
• RAM : 2 GB
• Hard Disk : 120 GB
• Raspberry Pi : Raspberry Pi 3 Model B Board
• Pi Camera : Raspberry Pi 5MP camera
• RFID Reader : RFID EM-18 Module
• LCD : 20*4
1.7 OVERVIEW OF REPORT
• Chapter 1 explains what the project is all about. It includes the objectives of the project and the challenges of student and staff attendance monitoring.
• Chapter 2 contains the concept of the project. It includes related works, the proposed system, and an explanation of the implementation and working of the project.
• Chapter 3 covers the front-end development. This chapter discusses the tools used in the client-side design and development.
• Chapter 4 covers the back-end development. This chapter discusses the design and development of the back-end server.
• Chapter 5 contains the results of the project.


CHAPTER 2
IOT BASED ATTENDANCE MONITORING SYSTEM
Since olden days, the method of recording attendance has always been manual. Even though this develops the student-teacher relationship and binds them together, it is time-consuming, prone to human error, and at times stressful. In order to make attendance error-free and reduce the wastage of time, it is necessary to implement an Automatic Attendance Management System, making the process more efficient and effective.
2.1 INTRODUCTION
Technology today aims to impart tremendous knowledge-oriented technical innovations. Deep Learning is one of the interesting domains that enables a machine to train itself by taking datasets as input and to provide an appropriate output during testing by applying suitable methods. Nowadays attendance is considered an important factor for both the student and the teacher of an educational organization. With the advancement of technology, the machine automatically detects the attendance of the students and maintains a record of the collected data.
2.2 RECOGNITION SYSTEM
Face recognition is a biometric technique which involves determining whether the image of a face matches any of the face images stored in a database. This problem is hard to solve automatically because of the changes that various factors, such as facial expression, aging and even lighting, can cause in the image. Among the different biometric techniques facial recognition may not be the most reliable, but it has several advantages over the others. It is widely used in areas such as security and access control, forensic medicine, police work and attendance management systems. Common recognition techniques include:
1. Signature based System
2. Fingerprint based System
3. Iris Recognition
4. RFID based System
5. Face Recognition
Amongst the above techniques, face recognition is natural, easy to use and does not require aid from the test subject.
Face recognition is a series of related problems which are solved step by step:
1. Capture a picture and discern all the faces in it.
2. Concentrate on one face at a time and understand that even if a face is turned in a strange direction or in bad lighting, it is still the same person.
3. Determine various unique features of the face that can help in distinguishing it from the face of any other person. These characteristics could be the size of the eyes, the nose, the length of the face, skin color, etc.
4. Compare these distinctive features of that face to all the faces of people we already know to find out the person's name.
The human brain is made to do all of this automatically and instantaneously. Computers are incapable of this kind of high-level generalization, so we need to teach or program each step of face recognition separately. Face recognition systems fall into two categories: verification and identification. Face verification is a 1:1 match that compares a face image against a single template face image whose identity is being claimed. Face identification, on the contrary, is a 1:N problem that compares a query face image against all the template images stored in the database.
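The 1:1 versus 1:N distinction can be sketched with plain feature vectors. This is an illustrative sketch only: the three-dimensional "embeddings", the names and the 0.6 threshold are invented for the example; a real system would obtain the vectors from a face recognition model.

```python
import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify(query, template, threshold=0.6):
    """1:1 verification: does the query match the claimed identity?"""
    return distance(query, template) < threshold

def identify(query, database, threshold=0.6):
    """1:N identification: find the closest enrolled identity, if any."""
    name, best = min(database.items(), key=lambda kv: distance(query, kv[1]))
    return name if distance(query, best) < threshold else None

# Toy 3-dimensional "embeddings" standing in for real face features.
db = {"bala": [0.1, 0.9, 0.2], "nishanth": [0.8, 0.1, 0.7]}
q = [0.15, 0.85, 0.25]
print(verify(q, db["bala"]))   # True
print(identify(q, db))         # bala
```

Verification needs one distance computation, while identification scans the whole database, which is why 1:N systems grow slower as more people are enrolled.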
2.3 LITERATURE SURVEY
[1] Gadekar, Dipak, Sanyukta Ghorpade, Vishakha Shelar, and Ajay Paithane, "IoT Based Attendance Monitoring System Using Face and Fingerprint" (2018): the camera captures the user's image, and if the detected image matches the sample image in the database, attendance is marked as present on the LCD. If the camera fails, the fingerprint scanner is activated and attendance is marked through it.
[2] Pasumarti, Priya, and P. Purna Sekhar, "Classroom Attendance Using Face Detection and Raspberry-Pi," International Research Journal of Engineering and Technology (IRJET) 5: a fingerprint module takes samples of the user's fingerprints; when the user places a finger on the module it is scanned and compared against the prints already stored in memory. The person whose fingerprint matches is marked present; if there is no match, an SMS is sent to the student's parents.
[3] Bhattacharya, Shubhobrata, Gowtham Sandeep Nainala, Prosenjit Das, and Aurobinda Routray, "Smart Attendance Monitoring System (SAMS): A Face Recognition Based Attendance System for Classroom Environment," in 2018 IEEE 18th International Conference on Advanced Learning Technologies (ICALT), pp. 358-360, IEEE, 2018: student images are stored in a database, and a camera module is placed where students enter the classroom. The USB camera captures each student's image, and the system automatically updates the student's presence in the class database and sends messages to the guardians of absentees and to the head of the department.
[4] Soniya, V., R. Swetha Sri, K. Swetha Titty, R. Ramakrishnan, and S. Sivakumar, "Attendance automation using face recognition biometric authentication," in Power and Embedded Drive Control (ICPEDC), 2017 International Conference on, pp. 122-127, IEEE, 2017: the system helps to automatically compute the attendance percentage of each individual student. The GUI provides a user-list function for adding and removing students' personal details.
2.4 PROPOSED SYSTEM
Biometric identification systems are widely used for the unique identification of humans, mainly for verification and identification. Biometrics is used as a form of identity access management and access control, so the use of biometrics in a student attendance management system is a secure approach. There are many types of biometric systems, such as fingerprint recognition, face recognition, voice recognition, iris recognition and palm recognition. In this project, we have used a face recognition system.

Fig.2.1 Block diagram


2.5 WORKING PRINCIPLE
The proposed system uses a USB camera connected to an IoT device such as a Raspberry Pi or an Arduino. A live video stream of employees is captured with the USB camera; the Raspberry Pi or Arduino takes those frames as input images, sends them to the cloud server, and makes use of a face recognition service to compare the input images with the existing images already uploaded in the database. Matched images are detected and attendance is marked with the date and time for the employees present in the local database using MySQL. Unmatched images are denied. This process is carried out every period and employees are given attendance accordingly.
This process is implemented by importing the OpenCV packages. OpenCV is an open-source library used for computer vision, machine learning and image processing, and it now plays a major role in real-time operation. In addition, a unique RFID card is given to each employee; when an employee enters the room and swipes the RFID card, the RFID sensor scans it and sends the data to the database, and the result is shown on the LED display. A suitable application is developed for tracking attendance.

Fig.2.2 Flow diagram


The admin tracks the attendance of the employees periodically, or whenever required by the administration, and views the result. The result is displayed on the monitor screen and stored in the database. Attendance is monitored, and if the system detects an unauthorized scan, a notification is immediately sent to the higher officer.
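The matched/denied logic described in this chapter can be illustrated with a minimal sketch. It uses an in-memory dictionary in place of the project's MySQL database, and the recognize callback stands in for the cloud face recognition service; all names here are hypothetical.

```python
from datetime import datetime

def process_frame(frame, recognize, attendance):
    """Run recognition on one captured frame and mark attendance.

    recognize(frame) is assumed to return the matched person's name,
    or None when the face is not found among the enrolled images.
    """
    name = recognize(frame)
    if name is None:
        return "denied"  # unmatched images are denied
    # Mark attendance with the current date and time for this person.
    attendance.setdefault(name, []).append(datetime.now())
    return f"marked: {name}"

# Stand-in recogniser: pretends only the frame labelled "bala" matches.
fake_recognize = lambda frame: "bala" if frame == "bala" else None
log = {}
print(process_frame("bala", fake_recognize, log))      # marked: bala
print(process_frame("stranger", fake_recognize, log))  # denied
```

In the real system the "denied" branch is where the illegal-access screen and the notification to the higher officer would be triggered.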


CHAPTER 3
CLIENT SIDE DESIGN AND DEVELOPMENT
Front-end web development, also known as client-side development, is the practice of producing HTML, CSS and JavaScript for a website or web application so that a user can see and interact with them directly. The challenge associated with front-end development is that the tools and techniques used to create the front end of a website change constantly, so the developer needs to be constantly aware of how the field is developing.
3.1 RASPBERRY PI
The Raspberry Pi is a series of credit-card-sized single-board computers developed in the UK by the Raspberry Pi Foundation with the intention of promoting the teaching of basic computer science in schools. The original Raspberry Pi and Raspberry Pi 2 are manufactured by several board manufacturers, and the hardware is the same across all of them. The original Raspberry Pi is based on the Broadcom BCM2835 system on a chip (SoC), which includes an ARM1176JZF-S 700 MHz processor and a VideoCore IV GPU, and was originally shipped with 256 megabytes of RAM, later upgraded (models B and B+) to 512 MB.
The system has Secure Digital (SD) (models A and B) or MicroSD (models A+ and B+) sockets for boot media and persistent storage. In 2014, the Raspberry Pi Foundation launched the Compute Module, which packages a BCM2835 with 512 MB RAM and a flash chip into a module for use as part of embedded systems.
3.2 HARDWARE

Fig.3.1 Raspberry Pi diagram


The above block diagram covers models A, B, A+ and B+. Models A and A+ have the lowest two blocks and the rightmost block missing (note that these three blocks are in a chip that actually contains a three-port USB hub, with a USB Ethernet adapter connected to one of its ports). In models A and A+ the USB port is connected directly to the SoC.
On the model B+ the chip contains a five-port hub, with four USB ports fed out, instead of the two on the model B.
3.3 PROCESSOR
The SoC used in the first generation Raspberry Pi is somewhat equivalent to the chips used in older smartphones (such as the iPhone 3G/3GS). The Raspberry Pi is based on the Broadcom BCM2835 system on a chip (SoC), which includes a 700 MHz ARM1176JZF-S processor, a VideoCore IV GPU, and RAM. It has a Level 2 cache of 128 KB, used primarily by the GPU, not the CPU. The SoC is stacked underneath the RAM chip, so only its edge is visible.
While operating at 700 MHz by default, the first generation Raspberry Pi provided real-world performance roughly equivalent to 0.041 GFLOPS. At the CPU level the performance is similar to a 300 MHz Pentium II of 1997-1999. The GPU provides 1 Gpixel/s or 1.5 Gtexel/s of graphics processing, or 24 GFLOPS of general-purpose computing performance. The graphics capabilities of the Raspberry Pi are roughly equivalent to the level of performance of the original Xbox.
3.4 OVER CLOCKING
The first generation Raspberry Pi chip operated at 700 MHz by default and did not become hot enough to need a heat sink or special cooling unless the chip was overclocked. The second generation runs at 900 MHz by default and likewise does not become hot enough to need a heatsink or special cooling, although overclocking may heat up the SoC more than usual. Most Raspberry Pi chips could be overclocked to 800 MHz, and some even higher, to 1000 MHz. There are reports that the second generation can be similarly overclocked, in extreme cases even to 1500 MHz (discarding all safety features and overvoltage limitations).
In the Raspbian Linux distro the overclocking options can be set at boot by running the software command "sudo raspi-config", without voiding the warranty. In those cases the Pi automatically shuts the overclocking down if the chip reaches 85 °C (185 °F), but it is possible to override the automatic overvoltage and overclocking settings (voiding the warranty); in that case, one can try putting an appropriately sized heatsink on the chip to keep it from heating far above 85 °C.
Newer versions of the firmware contain the option to choose between five overclock ("turbo") presets that, when turned on, try to get the most performance out of the SoC without impairing the lifetime of the Pi. This is done by monitoring the core temperature of the chip and the CPU load, and dynamically adjusting clock speeds and the core voltage. When demand on the CPU is low, or it is running too hot, performance is throttled; but if the CPU has much to do and the chip's temperature is acceptable, performance is temporarily
increased, with clock speeds of up to 1 GHz, depending on the individual board and on which of the turbo settings is used.
The five settings are:
1. None: 700 MHz ARM, 250 MHz core, 400 MHz SDRAM, 0 overvolt
2. Modest: 800 MHz ARM, 250 MHz core, 400 MHz SDRAM, 0 overvolt
3. Medium: 900 MHz ARM, 250 MHz core, 450 MHz SDRAM, 2 overvolt
4. High: 950 MHz ARM, 250 MHz core, 450 MHz SDRAM, 6 overvolt
5. Turbo: 1000 MHz ARM, 500 MHz core, 600 MHz SDRAM, 6 overvolt
In the highest preset the SDRAM clock was originally 500 MHz, but this was later changed to 600 MHz because 500 MHz sometimes caused SD card corruption. Simultaneously, in high mode the core clock speed was lowered from 450 to 250 MHz, and in medium mode from 333 to 250 MHz.
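The presets above correspond to entries that raspi-config writes into /boot/config.txt. As a hedged example, the "Medium" preset roughly amounts to the following (the option names are the standard Raspberry Pi config.txt overclock parameters; the values are taken from the list above):

```
# /boot/config.txt -- "Medium" overclock preset
arm_freq=900
core_freq=250
sdram_freq=450
over_voltage=2
```

The firmware reads these values at boot; removing the lines returns the board to its default clocks.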
3.5 RAM
On the older beta model B boards, 128 MB was allocated by default to the GPU, leaving 128 MB for the CPU. On the first 256 MB release of model B (and model A), three different splits were possible.

Fig.3.2 Location of connectors and ICs on original Raspberry Pi Model B.

The default split was 192 MB (RAM for the CPU), which should be sufficient for standalone 1080p video decoding, or for simple 3D, but probably not for both together. 224 MB was for Linux only, with just a 1080p framebuffer, and was likely to fail for any video or 3D. 128 MB was for heavy 3D, possibly also with video decoding (e.g. XBMC). Comparatively, the Nokia 701 uses 128 MB for the Broadcom VideoCore IV.
For the new model B with 512 MB RAM, new standard memory split files were initially released (arm256_start.elf, arm384_start.elf, arm496_start.elf) for 256 MB, 384 MB and 496 MB CPU RAM (and 256 MB, 128 MB and 16 MB video RAM).

But a week or so later the Raspberry Pi Foundation released a new version of start.elf that
could read a new entry in config.txt (gpu_mem=xx) and dynamically assign an amount of RAM
(from 16 to 256 MB in 8 MB steps) to the GPU, so the older method of memory splits became
obsolete, and a single start.elf worked the same for 256 MB and 512 MB Pis. The second
generation has 1 GB of RAM.
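The dynamic split described above is set with a single line in config.txt; for example, a fragment giving the GPU 128 MB (any value from 16 to 256 MB in 8 MB steps; 128 here is only an illustration):

```ini
# /boot/config.txt -- dynamically assign 128 MB of RAM to the GPU
gpu_mem=128
```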

Fig.3.3 Location of connectors and ICs on Raspberry Pi B+ rev 1.2, and Raspberry Pi 2
Model B.
3.6 NETWORKING
Though the model A and A+ do not have an 8P8C ("RJ45") Ethernet port, they can
be connected to a network using an external user-supplied USB Ethernet or Wi-Fi adapter.
On the model B and B+ the Ethernet port is provided by a built-in USB Ethernet adapter.
3.7 PERIPHERALS
Generic USB keyboards and mice are compatible with the Raspberry Pi.

3.8 GPIO CONNECTOR
RPi A+, B+ and 2B GPIO J8 40-pin pinout. Models A and B have only the first 26
pins. Model B rev 2 also has a pad P6 of 8 pins offering access to an additional 4 GPIO
connections.

GPIO#    2nd fun      pin#  pin#  2nd fun      GPIO#
-        +3V3          1     2    +5V          -
GPIO2    SDA1 (I2C)    3     4    +5V          -
GPIO3    SCL1 (I2C)    5     6    GND          -
GPIO4    GCLK          7     8    TXD0 (UART)  GPIO14
-        GND           9    10    RXD0 (UART)  GPIO15
GPIO17   GEN0         11    12    GEN1         GPIO18
GPIO27   GEN2         13    14    GND          -
GPIO22   GEN3         15    16    GEN4         GPIO23
-        +3V3         17    18    GEN5         GPIO24
GPIO10   MOSI (SPI)   19    20    GND          -
GPIO9    MISO (SPI)   21    22    GEN6         GPIO25
GPIO11   SCLK (SPI)   23    24    CE0_N (SPI)  GPIO8
-        GND          25    26    CE1_N (SPI)  GPIO7
(Models A and B stop here)
EEPROM   ID_SD        27    28    ID_SC        EEPROM
GPIO5    -            29    30    GND          -
GPIO6    -            31    32    -            GPIO12
GPIO13   -            33    34    GND          -
GPIO19   -            35    36    -            GPIO16
GPIO26   -            37    38    -            GPIO20
-        GND          39    40    -            GPIO21

Model B rev 2 P6 pad pinout:
Function  2nd fun     pin#  pin#  2nd fun      Function
-         +5V          1     2    +3V3         -
GPIO28    GPIO_GEN7    3     4    GPIO_GEN8    GPIO29
GPIO30    GPIO_GEN9    5     6    GPIO_GEN10   GPIO31
-         GND          7     8    GND          -

Models A and B provide GPIO access to the ACT status LED using GPIO 16. Models
A+ and B+ provide GPIO access to the ACT status LED using GPIO 47, and the Power
status LED using GPIO 35.
3.9 ACCESSORIES
Camera – On 14 May 2013, the foundation and the distributors RS Components &
Premier Farnell/Element 14 launched the Raspberry Pi camera board with a firmware
update to accommodate it. The camera board is shipped with a flexible flat cable that plugs
into the CSI connector located between the Ethernet and HDMI ports. In Raspbian, one
enables the system to use the camera board by installing or upgrading to the latest
version of the OS and then running raspi-config and selecting the camera option. The cost
of the camera module was 20 EUR in Europe (9 September 2013). It can produce 1080p,
720p and 640x480p video. The footprint dimensions are 25 mm x 20 mm x 9 mm.
Gertboard – A Raspberry Pi Foundation sanctioned device, designed for educational
purposes, that expands the Raspberry Pi's GPIO pins to allow interface with and control of
LEDs, switches, analogue signals, sensors and other devices. It also includes an optional
Arduino compatible controller to interface with the Pi.
Infrared Camera – In October 2013, the foundation announced that they would begin
producing a camera module without an infrared filter, called the Pi NoIR.
HAT (Hardware Attached on Top) expansion boards – Together with the model
B+, and inspired by the Arduino shield boards, the interface for HAT boards was devised by
the Raspberry Pi Foundation. Each HAT board carries a small EEPROM (typically a
CAT24C32WI-GT3) containing the relevant details of the board, so that the Raspberry Pi's
OS is informed of the HAT and of its technical details relevant to the OS using the HAT.
Mechanically, a HAT board uses the four mounting holes in their rectangular formation.
3.10 OPERATING SYSTEMS
The Raspberry Pi primarily uses Linux-kernel-based operating systems. The ARM11
chip at the heart of the Pi (pre-Pi 2) is based on version 6 of the ARM architecture. The
current releases of several popular versions of Linux, including Ubuntu, will not run on the
ARM11. It is not possible to run Windows on the original Raspberry Pi, though the newer
Raspberry Pi 2 will be able to run Windows 10. The Raspberry Pi 2 currently only supports
Ubuntu Snappy Core, Raspbian, OpenELEC and RISC OS. The install manager for the
Raspberry Pi is NOOBS.
The operating systems included with NOOBS are:
- Arch Linux ARM
- OpenELEC
- Pidora (Fedora Remix)
- Puppy Linux
- Raspbmc and the XBMC open source digital media centre
- RISC OS – The operating system of the first ARM-based computer
- Raspbian (recommended for Raspberry Pi 1) – Maintained independently of the
Foundation; based on the ARM hard-float (armhf) Debian 7 'Wheezy' architecture port,
originally designed for ARMv7 and later processors, compiled for the more limited ARMv6
instruction set of the Raspberry Pi.

A minimum size of 4 GB SD card is required. There is a Pi Store for exchanging
programs. The Raspbian Server Edition is a stripped version with fewer software packages
bundled as compared to the usual desktop computer oriented Raspbian.
The Wayland display server protocol enables the efficient use of the GPU for hardware-
accelerated GUI drawing functions. On 16 April 2014 a GUI shell for Weston called Maynard
was released.
 PiBang Linux is derived from Raspbian.
Raspbian for Robots - A fork of Raspbian for robotics projects with LEGO, Grove, and
Arduino.
3.11 OTHER OPERATING SYSTEMS
- Xbian, using the Kodi (formerly XBMC) open source digital media centre
- openSUSE
- Raspberry Pi Fedora Remix
- Slackware ARM – Version 13.37 and later runs on the Raspberry Pi without
modification. The 128–496 MB of available memory on the Raspberry Pi is at least
twice the minimum requirement of 64 MB needed to run Slackware Linux on an ARM
or i386 system. (Whereas the majority of Linux systems boot into a graphical user
interface, Slackware's default user environment is the textual shell / command line
interface.) The Fluxbox window manager running under the X Window System
requires an additional 48 MB of RAM.
- FreeBSD and NetBSD
- Plan 9 from Bell Labs and Inferno (in beta)
- Moebius – A light ARM hard-float distribution based on Debian. It uses the Raspbian
repository, but it fits in a 128 MB SD card. It has just minimal services and its memory
usage is optimized to keep a small footprint.
- OpenWrt – Primarily used on embedded devices to route network traffic.
- Kali Linux – A Debian-derived distribution designed for digital forensics and
penetration testing.
- Instant Web Kiosk – An operating system for digital signage purposes (web and
media views)
- arkOS – Website and email self-hosting
- Minion – Dedicated operating system for mining cryptocurrency
- Kano OS – http://kano.me/downloads
- Nard SDK – For industrial embedded systems
- Sailfish OS – With Raspberry Pi 2 (due to the ARM Cortex-A7 CPU used; Raspberry
Pi 1 uses the different ARMv6 architecture and Sailfish requires ARMv7)
3.12 PLANNED OPERATING SYSTEMS
Windows 10 – Microsoft announced in February 2015 that it would offer a free version
of the to-be-released Windows 10 running natively on the Raspberry Pi.
DRIVER APIs
The Raspberry Pi can use a VideoCore IV GPU via a binary blob, which is loaded into
the GPU at boot time from the SD card, together with additional software that was initially
closed source. This part of the driver code was later released; however, much of the actual
driver work is done using the closed source GPU code.
Application software uses calls to closed source run-time libraries (OpenMAX,
OpenGL ES or OpenVG), which in turn call an open source driver inside the Linux kernel,
which then calls the closed source VideoCore IV GPU driver code. The API of the kernel
driver is specific to these closed libraries.

Fig.3.4 Scheme of the implemented APIs.


Video applications use OpenMAX, 3D applications use OpenGL ES and 2D applications use
OpenVG, which both in turn use EGL. OpenMAX and EGL in turn use the open source kernel
driver.
THIRD PARTY APPLICATION SOFTWARE
 Mathematica – Since 21 November 2013, Raspbian includes a full installation of this
proprietary software for free as of 1 August 2014 the version is Mathematica 10.
 Minecraft – Released 11 February 2013; a version for the Raspberry Pi, in which
you can modify the game world with code.

3.13 RECEPTION AND USE


Technology writer Glyn Moody described the project in May 2011 as a "potential BBC
Micro 2.0", not by replacing PC compatible machines but by supplementing them. In March
2012 Stephen Pritchard echoed the BBC Micro successor sentiment in ITPRO. Alex Hope,
co-author of the Next Gen report, is hopeful that the computer will engage children with the
excitement of programming.
Co-author Ian Livingstone suggested that the BBC could be involved in building
support for the device, possibly branding it as the BBC Nano. Chris Williams, writing in The
Register, sees the inclusion of programming languages such as KidsRuby, Scratch and
BASIC as a "good start" to equip kids with the skills needed in the future – although it
remains to be seen how effective their use will be.
The Centre for Computing History strongly supports the Raspberry Pi project, feeling
that it could "usher in a new era". Before release, the board was showcased by ARM's CEO
Warren East at an event in Cambridge outlining Google's ideas to improve UK science and
technology education.
Harry Fairhead, however, suggests that more emphasis should be put on improving
the educational software available on existing hardware, using tools such as Google App
Inventor to return programming to schools, rather than adding new hardware choices. Simon
Rockman, writing in a ZDNet blog, was of the opinion that teens will have "better things to
do", despite what happened in the 1980s.
In October 2012, the Raspberry Pi won T3's Innovation of the Year award, and futurist
Mark Pesce cited a (borrowed) Raspberry Pi as the inspiration for his ambient device project
MooresCloud. In October 2012, the British Computer Society reacted to the announcement
of enhanced specifications by stating, "It's definitely something we'll want to sink our teeth
into."
In February 2015, a switched-mode power supply chip, designated U16, of the
Raspberry Pi 2 model B version 1.1 (the initially released version) was found to be
vulnerable to flashes of light, particularly the light from xenon camera flashes and green
and red laser pointers.
However, other bright lights, particularly ones that are on continuously, were found
to have no effect. The symptom was the Raspberry Pi 2 spontaneously rebooting or turning
off when these lights were flashed at the chip. Initially, some users and commenters
suspected that the electromagnetic pulse from the xenon flash tube was causing the
problem by interfering with the computer's digital circuitry, but this was ruled out by tests
where the light was either blocked by a card or aimed at the other side of the Raspberry Pi
2, both of which did not cause a problem. The problem was narrowed down to the U16 chip
by covering first the system on a chip (main processor) and then U16 with opaque poster
mounting compound.
Light being the sole culprit, instead of EMP, was further confirmed by the laser pointer
tests, where it was also found that less opaque covering was needed to shield against the
laser pointers than against the xenon flashes. The U16 chip appears to be bare silicon
without a plastic cover (i.e. a chip-scale package or wafer-level package), which would, if
present, block the light. The chip, like all semiconductors, is light-sensitive (photovoltaic
effect); silicon is transparent to infrared light; and xenon flashes emit more infrared light
than laser pointers, which is why they require more light shielding.
It is currently thought that this combination of factors allows sudden bright infrared
light to cause an instability in the output voltage of the power supply, triggering shutdown or
restart of the Raspberry Pi 2. Unofficial workarounds include covering U16 with opaque
material (such as electrical tape, lacquer, poster mounting compound, or even balled-up
bread), putting the Raspberry Pi 2 in a case, and avoiding taking photos of the top side of
the board with a xenon flash.
This issue was not caught before the release of the Raspberry Pi 2 because while
commercial electronic devices are routinely subjected to tests of susceptibility to radio
interference, it is not standard or common practice to test their susceptibility to optical
interference.
3.14 COMMUNITY
The Raspberry Pi community was described by Jamie Ayre of FLOSS software
company AdaCore as one of the most exciting parts of the project. Community blogger
Russell Davis said that the community strength allows the Foundation to concentrate on
documentation and teaching. The community is developing fanzines around the platform,
such as The MagPi. A series of community Raspberry Jam events have been held across
the UK and further afield, led by Alan O'Donohoe, principal teacher of ICT at Our Lady's High
School, Preston, and a teacher-led community from Raspberry Jam has started building a
crowdsourced scheme of work.
3.15 USE IN EDUCATION
As of January 2012, enquiries about the board in the United Kingdom have been
received from schools in both the state and private sectors, with around five times as much
interest from the latter. It is hoped that businesses will sponsor purchases for less
advantaged schools. The CEO of Premier Farnell said that the government of a country in
the Middle East has expressed interest in providing a board to every schoolgirl, in order to
enhance her employment prospects.
The Raspberry Pi Foundation and Oxford, Cambridge and RSA Examinations
launched a beta of the Cambridge GCSE Computing Online course or MOOC (Massive
Open Online Course) based around the current GCSE Computing syllabus. The MOOC will
consist of videos, animations and interactive tasks on every part of the curriculum presented
by UK teachers. The beta is currently presented by Clive Beale who is the Head of
Educational Development. All tasks will be supported by written materials and audio and
text transcripts available for disabled students.
 The first MOOC will be linked to a formal GCSE qualification.
Oxford, Cambridge and RSA Examinations also provide resources to use with a
Raspberry Pi for teachers who would like to use the device in their lessons including Getting
started, Singing Jelly Baby and other features about the Raspberry Pi.
These annotations are part of the 68-point iBUG 300-W dataset with which the dlib facial
landmark predictor was trained.
It is important to note that other options for facial landmark detectors exist, including
the 194-point model trained on the HELEN dataset. Regardless of which dataset is
used, the same dlib framework can be leveraged to train a shape predictor on the input
training data, which is useful if one wants to train facial landmark detectors or custom
shape predictors.


CHAPTER 4
SERVER SIDE DESIGN AND DEVELOPMENT
A backend developer makes use of the technology required to develop the
server-side components of a website. A backend developer is responsible
for building the structure of a software application, and backend developers typically work
in groups or with a team.
4.1 ALGORITHM AND METHODS
Haar feature-based cascade classification is an effective object detection
method proposed by Paul Viola and Michael Jones in their 2001 paper, "Rapid Object
Detection using a Boosted Cascade of Simple Features". It is a machine-learning-based
approach where a cascade function is trained from a large number of positive and negative
images. It is then used to detect objects in other images.

Fig.4.1 Types of Haar Features


A weak classifier is a classifier which is only slightly better than a random
prediction. A Haar-like feature is a rectangle which is split into two, three or four
rectangles, each of which is black or white. A Haar cascade classifier trains a machine
learning model to detect objects in a picture or a video. Haar-like features are weak
classifiers and will be used here for face recognition.
The extracted combination of features is used for detecting faces in pictures
or in a video. The features are matched only within a block of pixels defined by a
scale; the scale can be a square of 232x232 pixels, which is the dimension of the
image taken for feature extraction in our system. Each feature of the combination is
matched block by block. If one of the features does not appear in a block, the search
in that block is stopped.

Fig.4.2 Haar features applied on an image


The remaining features are not tested because the machine concludes that
there is no face in this block. Then a new block is taken, and the process is once again
repeated. In this way the cascade classifier tests all the blocks of pixels for the searched
combination of features, as in the images shown above where features are placed.
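The early-rejection behaviour described above can be sketched in a few lines of Python (a toy illustration with made-up stage scores and thresholds, not the real OpenCV implementation):

```python
# Toy illustration of cascade early rejection (made-up stage scores and
# thresholds -- not the real OpenCV/Viola-Jones implementation).

def evaluate_block(block_scores, stage_thresholds):
    """Return True if a pixel block passes every cascade stage.

    block_scores:     score the block achieves at each stage
    stage_thresholds: minimum score required to pass each stage
    """
    for score, threshold in zip(block_scores, stage_thresholds):
        if score < threshold:
            # The remaining stages are never evaluated -- this early exit
            # is what makes the cascade fast on the many non-face blocks.
            return False
    return True

print(evaluate_block([0.9, 0.2, 0.8], [0.5, 0.5, 0.5]))  # False (fails stage 2)
print(evaluate_block([0.9, 0.7, 0.8], [0.5, 0.5, 0.5]))  # True (passes all stages)
```

Because most blocks of an image contain no face, most of them fail an early stage and the expensive later stages are skipped, which is what makes the cascade fast in practice.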
Local Binary Pattern (LBP)
For the facial recognition process, the LBP algorithm is used. A great advantage of
LBP is that it is illumination invariant: if the lighting on the scene changes, all the pixel
values go up, but the relative difference between these values stays the same.
Local Binary Pattern is an efficient texture operator which labels the pixels of an
image by thresholding the neighbourhood of each pixel and considering the result as a
binary number. When LBP is combined with the histograms of oriented gradients (HOG)
descriptor, it improves the detection performance considerably on some datasets.
In our system 8 neighbours are used for LBP. 3. Grid X: the number of cells in
the horizontal direction. The more cells, the finer the grid and the higher the dimensionality
of the resulting feature vector; it is set to 8. 4. Grid Y: the number of cells in the vertical
direction, with the same trade-off; it is also set to 8.

Eq.4.3 Formula 1
This calculation is done by the LBP algorithm for a face. After the parameters are
obtained, the first step is to convert the image to grayscale. Next, a window of
3x3 pixels is obtained from the image, with the intensity of each pixel denoted by a value from 0-255.

The central value is then selected to be used as the threshold value, which defines
the new values of the 8 neighbours as shown in the figure above. If the intensity of a
neighbour is greater than or equal to that of the centre pixel, its value is set to 1;
otherwise, the value is set to 0.

Fig.4.4 LBP algorithm

Eq.4.5 Formula 2: LBP(P, R) = sum over p = 0 to P-1 of s(g_p - g_c) * 2^p,
where s(x) = 1 if x >= 0, and 0 otherwise.
In the given formula, g_c is the intensity value of the central pixel and g_p is the
intensity of the neighbouring pixel with index p. The s(x) function shown above is the
threshold function which determines the binary values for the 8 neighbouring pixels.
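The thresholding step can be sketched as follows (a pure-Python illustration; the clockwise neighbour ordering starting at the top-left corner is an assumption, since any fixed ordering yields a valid LBP code):

```python
# Pure-Python sketch of the 3x3 LBP thresholding step. The clockwise
# neighbour ordering starting at the top-left corner is an assumption;
# any fixed ordering yields a valid LBP code.

def lbp_code(window):
    """Compute the LBP code for the centre pixel of a 3x3 window."""
    centre = window[1][1]
    # Neighbours read clockwise: TL, T, TR, R, BR, B, BL, L
    neighbours = [(0, 0), (0, 1), (0, 2), (1, 2),
                  (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for r, c in neighbours:
        bit = 1 if window[r][c] >= centre else 0  # s(g_p - g_c)
        code = (code << 1) | bit
    return code

window = [[6, 5, 2],
          [7, 6, 1],
          [9, 8, 7]]
print(lbp_code(window))  # 143 (binary 10001111)
```

Sliding this window over every pixel of the grayscale image replaces each pixel with its 8-bit LBP code, producing the texture image used in the next step.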

Fig.4.6 Formula Flow Chart


Using LBP combined with histograms, we can represent face images with
a simple data vector. LBP requires 4 parameters, namely Radius, Neighbours, Grid
X and Grid Y. 1. Radius: the radius used to build the circular local binary pattern; it
represents the radius around the central pixel. 2. Neighbours: the number of sample
points used to build the circular local binary pattern. The more sample points you include, the
higher the computational cost.
4.2 HISTOGRAM EXTRACTION
At the end of this LBP process, we have a new image which better represents the
characteristics of the original image. After this we extract the histograms of regions as
shown below:

Fig.4.7 Histogram extraction and concatenation


As we have an image in grayscale, each histogram will contain only 256 positions
(0~255) representing the occurrences of each pixel intensity. Concatenating the
histograms then creates a new, bigger histogram. After this step, histograms are created for
each image in the training dataset.
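The per-cell histogram extraction and concatenation can be sketched as follows (a pure-Python illustration; a real implementation would use NumPy/OpenCV):

```python
# Pure-Python sketch of per-cell histogram extraction and concatenation;
# a real implementation would use NumPy/OpenCV for speed.

def grid_histograms(image, grid_x=8, grid_y=8, bins=256):
    """Split a grayscale image (list of rows) into grid_x * grid_y cells,
    build one intensity histogram per cell, and concatenate them."""
    height, width = len(image), len(image[0])
    feature = []
    for gy in range(grid_y):
        for gx in range(grid_x):
            hist = [0] * bins
            for r in range(gy * height // grid_y, (gy + 1) * height // grid_y):
                for c in range(gx * width // grid_x, (gx + 1) * width // grid_x):
                    hist[image[r][c]] += 1
            feature.extend(hist)  # concatenation step
    return feature

# A 4x4 toy image with a 2x2 grid gives a 4 * 256 = 1024-element vector.
img = [[0, 0, 255, 255],
       [0, 0, 255, 255],
       [10, 10, 20, 20],
       [10, 10, 20, 20]]
print(len(grid_histograms(img, grid_x=2, grid_y=2)))  # 1024
```

With the Grid X = Grid Y = 8 setting used in our system and 256 bins per cell, the concatenated vector has 8 * 8 * 256 = 16384 positions.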
Now, at the actual recognition step, we compute the corresponding histogram for
the input image by applying the steps mentioned above. The distance between histograms
can be calculated using Euclidean distance, chi-square, etc. We can then use a threshold and
the 'confidence' to automatically estimate whether the algorithm has correctly recognized the
image: recognition is considered successful if the confidence is lower than the defined
threshold. The Euclidean distance formula for calculating the distance between histograms
is given below:

Eq.4.8 Formula 3: D = sqrt( sum over i of (Hist1_i - Hist2_i)^2 )
- Hist1: value of the histogram created from the trained dataset images.
- Hist2: value of the histogram obtained at the recognition step.
- D: distance between Hist1 and Hist2.
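A minimal sketch of the distance computation and the threshold test described above:

```python
# Minimal sketch of the Euclidean distance between two histograms and
# the threshold test used to accept or reject a match.
import math

def euclidean_distance(hist1, hist2):
    """D = sqrt(sum of squared differences between histogram bins)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(hist1, hist2)))

def is_recognized(hist1, hist2, threshold):
    """Accept the match when the confidence (distance) is below the threshold."""
    return euclidean_distance(hist1, hist2) < threshold

print(euclidean_distance([0, 3], [4, 0]))          # 5.0
print(is_recognized([0, 3], [4, 0], threshold=6))  # True
```

A lower distance means a more confident match, which is why recognition is accepted only below the threshold.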


4.3 OpenCV
OpenCV (Open Source Computer Vision) is a library of programming functions
mainly aimed at real-time computer vision. Originally developed by Intel, it was later
supported by Willow Garage and then Itseez (which was later acquired by Intel). The library
is cross-platform and free for use under the open-source BSD license.
OpenCV supports some models from deep learning frameworks like TensorFlow,
Torch/PyTorch (after converting to an ONNX model) and Caffe, according to a defined
list of supported layers.
History
Officially launched in 1999 the OpenCV project was initially an Intel Research
initiative to advance CPU-intensive applications, part of a series of projects including real-
time ray tracing and 3D display walls. The main contributors to the project included a
number of optimization experts in Intel Russia, as well as Intel's Performance Library
Team.
In the early days of OpenCV, the goals of the project were described as:
 Advance vision research by providing not only open but also optimized
code for basic vision infrastructure. No more reinventing the wheel.
 Disseminate vision knowledge by providing a common infrastructure
that developers could build on, so that code would be more readily
readable and transferable.
 Advance vision-based commercial applications by making portable,
performance-optimized code available for free – with a license that did
not require code to be open or free itself.
The first alpha version of OpenCV was released to the public at the IEEE
Conference on Computer Vision and Pattern Recognition in 2000, and five betas were
released between 2001 and 2005. The first 1.0 version was released in 2006. A version
1.1 "pre-release" was released in October 2008.
The second major release of OpenCV was in October 2009. OpenCV 2
includes major changes to the C++ interface, aiming at easier, more type-safe patterns,
new functions, and better implementations of existing ones in terms of performance
(especially on multi-core systems). Official releases now occur every six months and
development is now done by an independent Russian team supported by commercial
corporations.


In August 2012, support for OpenCV was taken over by a non-profit foundation
OpenCV.org, which maintains a developer and user site.
In May 2016, Intel signed an agreement to acquire Itseez, a leading developer of
OpenCV.
4.4 APPLICATIONS
openFrameworks running the OpenCV add-on example.
OpenCV's application areas include:
 2D and 3D feature toolkits
 Facial recognition system
 Gesture recognition
 Human–computer interaction (HCI)
 Object identification
 Motion tracking
 Augmented reality

Fig.4.9 Blob Detection


To support some of the above areas, OpenCV includes a statistical machine learning
library that contains:
 Boosting
 Decision tree learning
 Gradient boosting trees
 k-nearest neighbour algorithm


 Artificial neural networks


 Random forest
 Support vector machine (SVM)
 Deep neural networks (DNN)
4.5 PROGRAMMING LANGUAGE
OpenCV is written in C++ and its primary interface is in C++, but it still retains a
less comprehensive, though extensive, older C interface. There are bindings in Python,
Java and MATLAB/Octave.
The API for these interfaces can be found in the online documentation. Wrappers
in other languages such as C#, Perl, Ch, Haskell, and Ruby have been developed to
encourage adoption by a wider audience.
Since version 3.4, OpenCV.js provides a JavaScript binding for a selected subset of
OpenCV functions for the web platform. All new developments and algorithms in
OpenCV are developed in the C++ interface.
4.6 HARDWARE ACCELERATION
• If the library finds Intel's Integrated Performance Primitives on the system, it will
use these proprietary optimized routines to accelerate itself.
• A CUDA-based GPU interface has been in progress since September 2010.
• An OpenCL-based GPU interface has been in progress since October 2012,
documentation for version 2.4.13.3 can be found at docs.opencv.org.
4.7 OS SUPPORT
OpenCV runs on the following desktop operating systems:
- Windows
- Linux
- macOS
- FreeBSD
- NetBSD
- OpenBSD
OpenCV runs on the following mobile operating systems:
- Android
- iOS
- BlackBerry 10
The user can get official releases from SourceForge or take the latest sources from
GitHub. OpenCV uses CMake.
4.8 COMPARISON BETWEEN OPENCV AND MATLAB
- OpenCV is a computer vision library which gives you functions for image and
video processing in C/C++.
- It is a kind of simplified DirectShow with many image and video processing
functions.
- You can build very powerful algorithms which give very good efficiency
compared to the vision toolbox of MATLAB.
- You can also build commercial applications with it, because its BSD licensing
is a permissive kind of open source.
- MATLAB, by comparison, is good and very easy, but lacks efficiency and speed,
since MATLAB is a scripting language, unlike OpenCV, which gets compiled.
- It is difficult to make video conferencing or any other video processing efficient
in MATLAB; any encoder is almost always built using C/C++ (rather, C, because
it is closer to assembly).
- But you will have a little hard time learning OpenCV if you are a beginner in C/C++.
MATLAB can also do image processing, so why OpenCV? Stated below are some
differences between the two. Once you go through them, you can decide for yourself.
Advantages of OpenCV over MATLAB (collected from various blogs and forums):
Speed: MATLAB is built on Java, and Java is built upon C. So when you run a MATLAB
program, your computer is busy trying to interpret the MATLAB code, then turning it
into Java, and finally executing it. OpenCV, on the other hand, is basically a
library of functions written in C/C++.
You are closer to directly providing machine language code to the computer to get
executed. So ultimately you get more image processing done per processing cycle, and
less interpreting.


As a result, programs written in OpenCV run much faster than similar
programs written in MATLAB. Conclusion? OpenCV is very fast when it comes to
speed of execution.
For example, we might write a small program to detect people's smiles in a
sequence of video frames. In MATLAB, we would typically get 3-4 frames analysed per
second. In OpenCV, we would get at least 30 frames per second, resulting in real-time
detection.
Resources needed: Due to the high-level nature of MATLAB, it uses a lot of your
system's resources. MATLAB code requires over a gigabyte of RAM to run
through video. In comparison, typical OpenCV programs only require about 70 MB of RAM to
run in real time. The difference, as you can easily see, is huge.
Cost: The list price for the base (no toolboxes) MATLAB (commercial, single-user
license) is around USD 2150, whereas OpenCV is free (BSD license).
Portability: MATLAB and OpenCV run equally well on Windows, Linux and
macOS. However, any device that can run C can, in all probability, run OpenCV.
Despite all these amazing features, OpenCV does lose out to MATLAB on some points.
Ease of use: MATLAB is a relatively easy language to get to grips with. It is
a pretty high-level scripting language, meaning that you don't have to worry about
libraries, declaring variables, memory management or other lower-level programming
issues. As such, it can be very easy to throw together some code to prototype an image
processing idea, for example reading in an image from a file and displaying it.
Memory management: OpenCV is based on C. As such, every time you allocate
a chunk of memory you have to release it again. If you have a loop in your code where
you allocate a chunk of memory and forget to release it afterwards, you get
what is called a "leak", where the program uses a growing amount of memory
until it crashes from no remaining memory. Due to the high-level nature of MATLAB, it is
"smart" enough to automatically allocate and release memory in the background.
Development environment: MATLAB comes with its own development
environment. For OpenCV, there is no particular IDE that you have to use; instead, you
have a choice of any C/C++ IDE depending on whether you are using Windows,
Linux, or OS X. For Windows, Microsoft Visual Studio or NetBeans is the typical IDE used
for OpenCV; in Linux, it is Eclipse or NetBeans; and in OS X, Apple's Xcode.


CHAPTER 5
RESULT
An IoT based attendance monitoring system was developed for tracking students'
daily attendance. This project was built with a Raspberry Pi, an RFID reader, RFID
cards, a 5 MP Raspberry Pi camera and an LCD display.
5.1 CAMERA & RFID READER
The Raspberry Pi is interfaced with the RFID reader, camera and LCD display. The
connection of the sensors to the Raspberry Pi is shown in Fig.5.1.

Fig.5.1 Raspberrypi with Sensors

After initializing the hardware, we set up the camera and RFID reader apparatus to
demonstrate the working of the system. The LCD then shows the welcome screen of the
system, as shown in Fig.5.2.

Fig.5.2 Welcome Display


The student's face, RFID card number and name must first be registered in the
program. After that, the student places his or her RFID card over the RFID reader, and the
system displays a welcome message with the student's name. The Raspberry Pi camera
then turns on and captures the student's face. Next, the system asks the student to wear a
mask: if a mask is worn, the attendance is marked and the data is uploaded to ThingSpeak
and Firebase; otherwise no attendance is recorded. Furthermore, if an RFID card number is
not registered in the program, i.e. an unauthorized user or student swipes a card over the
RFID reader, the system displays "wrong card" and sends an alert message to the admin.
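The decision flow just described can be sketched as follows (an illustrative pure-Python sketch; the function and variable names are hypothetical, not the project's actual code):

```python
# Illustrative sketch of the attendance-marking decision flow; names are
# hypothetical, not the project's actual code.

def process_scan(card_id, registered_students, mask_detected):
    """Unknown card -> admin alert; no mask -> prompt; else mark attendance."""
    if card_id not in registered_students:
        return "wrong card - alert sent to admin"
    if not mask_detected:
        return "please wear your mask"
    return "attendance marked for " + registered_students[card_id]

students = {"19DC03": "Bala Ganesh"}
print(process_scan("19DC03", students, mask_detected=True))
# attendance marked for Bala Ganesh
```

In the real system the face match, mask check and cloud upload each replace one of these simple returns.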

Fig.5.3 Placing the RFID Card Fig.5.4 RFID Card is matched

Fig.5.5 Mask Verification Fig.5.6 Attendance Marked

Fig.5.7 Illegal card Access Fig.5.8 SMS Notification to Admin


5.2 CLOUD – ThingSpeak & Firebase


The Raspberry Pi uploads the card id, name, face id and mask status data to the
ThingSpeak cloud, where it is plotted as graphs, as shown in Fig.5.9 & Fig.5.10.

Fig.5.9 RFID card id graph Fig.5.10 Face id graph

Likewise, the card id, name, face id and mask status data are uploaded to the Firebase
database, as shown in Fig.5.11.

Fig.5.11 Firebase database
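An upload to ThingSpeak is an HTTP request to its update endpoint; the request URL might be built as sketched below (the mapping of card id, face id and mask status to field1-field3 is an assumption about this project's channel layout, and the API key is a placeholder):

```python
# Sketch of building a ThingSpeak update request. The field1-field3
# mapping is an assumption about this project's channel layout, and the
# API key below is a placeholder.
from urllib.parse import urlencode

def thingspeak_update_url(api_key, card_id, face_id, mask_status):
    params = urlencode({
        "api_key": api_key,   # write API key of the ThingSpeak channel
        "field1": card_id,
        "field2": face_id,
        "field3": mask_status,
    })
    return "https://api.thingspeak.com/update?" + params

print(thingspeak_update_url("XXXXXXXX", "12AB34", 7, 1))
```

Requesting this URL (e.g. with urllib.request or the requests library) writes one row to the channel, which ThingSpeak then plots as in Fig.5.9 and Fig.5.10.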


5.3 CUSTOM WEB PAGE


The admin can log in to the web page through a dedicated administration login
site, as shown in Fig.5.12. A web application was created to view the details of the card id,
name, face id, mask status of the students, etc., as shown in Fig.5.13.

Fig.5.12 Login Page

Fig.5.13 Dash Board Screen


This page shows the attendance report live. The attendance data can be filtered by
entering a particular date in the option menu, and the students present on that date are
highlighted in green, while red represents the absentees. The data can also be
downloaded in Excel format. Fig 5.14 shows the student report.
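The date filter and the green/red highlighting can be sketched as two small helpers. The record field names (`date`, `present`) are assumptions about the report page's data, made up for this illustration.

```python
def filter_by_date(records, date):
    """Return only the attendance rows for the given date, mirroring the
    date filter in the report page's option menu."""
    return [r for r in records if r["date"] == date]

def row_colour(record):
    """Green marks a present student, red an absentee, as in Fig 5.14."""
    return "green" if record["present"] else "red"
```

The Excel download would simply serialize the filtered list to a spreadsheet, e.g. with a CSV writer or a library such as openpyxl.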

Fig.5.14 Report Page


CONCLUSION
This system improves attendance tracking in every domain, such as schools,
colleges, organizations, institutions, and companies. Capturing live images from the
camera and applying face detection and face recognition techniques reduces manual,
traditional work. In our solution, we generate the dataset through a registration
interface and train the images using a Haar cascade with an AdaBoost classifier. Once
training is complete, the system successfully distinguishes faces from non-faces and
recognizes registered students. When a stored image matches a captured image, the
attendance sheet is updated automatically with the time and date. Since the entry time
of every student is stored, it becomes easy for faculty members to keep track of each
student's punctuality.
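The final step, updating the attendance sheet with a timestamp when the recognised face matches the stored one, can be sketched as follows. The function name, ID format, and sheet structure are assumptions for illustration, not the project's actual code.

```python
from datetime import datetime

def mark_attendance(stored_id, recognised_id, sheet):
    """Append a timestamped row to the sheet when recognition matches
    the stored identity; return whether attendance was marked."""
    if recognised_id == stored_id:
        sheet.append({
            "student": stored_id,
            # Entry time recorded so faculty can track punctuality.
            "time": datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
        })
        return True
    return False
```

A mismatch leaves the sheet untouched, which is what lets the report page later distinguish present (marked) from absent (unmarked) students.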


BIBLIOGRAPHY
1. Gadekar, Dipak, Sanyukta Ghorpade, Vishakha Shelar, and Ajay Paithane. "IoT Based
Attendance Monitoring System Using Face and Fingerprint." 2018.
2. Pasumarti, Priya, and P. Purna Sekhar. "Classroom Attendance Using Face
Detection and Raspberry-Pi." International Research Journal of Engineering and
Technology (IRJET) 5, no. 03 (2018): 167-171.
3. Bhattacharya, Shubhobrata, Gowtham Sandeep Nainala, Prosenjit Das, and
Aurobinda Routray. "Smart Attendance Monitoring System (SAMS): A Face Recognition
Based Attendance System for Classroom Environment." In 2018 IEEE
18th International Conference on Advanced Learning Technologies (ICALT), pp. 358-
360. IEEE, 2018.
4. Soniya, V., R. Swetha Sri, K. Swetha Titty, R. Ramakrishnan, and S. Sivakumar.
"Attendance Automation Using Face Recognition Biometric Authentication." In Power and
Embedded Drive Control (ICPEDC), 2017 International Conference on, pp. 122-127.
IEEE, 2017.
5. Uma, K., S. Srilatha, D. Kushal, A. R. Pallavi, and V. Nanda Kumar. "Biometric
Attendance Prediction Using Face Recognition Method." Indian Journal of Science and
Technology 10, no. 17 (2017).

