Automatic Toll Collection System Using RFID With Vehicle Classification Using Convolutional Neural Network

TABLE OF CONTENTS

CHAPTER TITLE PAGE NO


ABSTRACT iv
LIST OF FIGURES v-vi
LIST OF ABBREVIATIONS vi

1 INTRODUCTION 1
1.1 OUTLINE OF THE PROJECT 2
1.2 MOTIVATION 2
1.3 PROBLEM STATEMENT 2

2 LITERATURE SURVEY 3-5

3 METHODOLOGY 6
3.1 AIM 6
3.2 SCOPE 6
3.2.1 DRAWBACKS OF EXISTING SYSTEM 6
3.3 HARDWARE REQUIREMENTS 6
3.4 SOFTWARE REQUIREMENTS 7
3.5 SYSTEM DESIGN 8
3.6 MODULE IMPLEMENTATION 9
3.6.1 DATA PRE-PROCESSING 9
3.6.2 FEATURE EXTRACTION 9
3.6.3 SLIDING WINDOWS 9
3.7 HAAR CASCADE ALGORITHM 10
3.7.1 HAAR FEATURE SELECTION 10
3.7.2 CREATING INTEGRAL IMAGES 11
3.7.3 ADABOOST TRAINING 11
3.7.4 CASCADING CLASSIFIERS 11
3.8 APPLICATIONS 12

3.8.1 VIRTUAL PERSONAL ASSISTANTS 12
3.8.2 TRAFFIC PREDICTIONS 13

3.8.3 VIDEO SURVEILLANCE 13


3.9 PROPOSED SYSTEM 13
4 DESIGN METHODOLOGY 14-18
4.1 Hardware Description 18
4.1.1 Node MCU 19
4.1.2 Schematic & Reference Design 20
4.1.3 Specifications 21
4.1.4 Power 21
4.1.5 Memory 22
4.1.6 Input and Output 22
4.1.7 Communication 23
4.1.8 Programming 24
4.1.9 Automatic (Software) Reset 24
4.2 RFID Reader EM-18 25-28
4.3 Servo Motor 29-33
4.4 Light Emitting Diode (LED) 34-36
4.5 Piezo-Buzzer 37-39
5 SOFTWARE DESCRIPTION 40
5.1 Creating Project in Arduino 1.7.11 Version 40
5.1.1 Arduino IDE Installation 40-44
5.1.2 Anaconda IDE Installation for Windows 45-49
5.1.3 Computer Vision vs Image Processing 50-54
5.2 Python Software 54
5.2.1 Writing A Python program 55-56
5.3 Invoking The Interpreter 57-60

6 RESULTS AND DISCUSSION 61-62

7 CONCLUSION AND FUTURE SCOPE 63


Advantages And Applications 64

REFERENCES 65
APPENDICES 66
A. SOURCE CODE 66-67

B. PUBLICATION WITH REPORT 68-73

ABSTRACT

Toll vehicle classification is an important task with many uses in traffic management and toll collection systems. In this paper, the network of the Vinci Autoroutes group (the largest French highway concession) is considered, where millions of vehicles are classified in real time every year. Even a small decrease in classification performance can therefore cause serious economic losses, so accuracy and time complexity are critical for the toll collection system. The current classification algorithm uses scene features to detect vehicle classes; however, it requires a large labelled dataset and has limitations when multiple vehicles are in the scene. Herein, we propose a novel context-aware vehicle classification method that takes advantage of the semantic spatial relationships of the objects. The experiments show that our method performs as accurately as the existing model with a significantly smaller labelled dataset (74 times smaller). Moreover, the obtained accuracy of the proposed method is 99.97%, compared to 99.79% achieved by the current method when using the same training set. We apply the Haar cascade algorithm for vehicle detection and classification.

LIST OF FIGURES

FIGURE NO NAME OF THE FIGURE PAGE NO

3.1 BLOCK DIAGRAM: STAGE 1 7


3.2 PROPOSED ARCHITECTURE 8
3.3 HAAR FEATURE 10
3.4 CASCADE CLASSIFIER 11
3.5 TRAIN CASCADE OBJECT DETECTOR 12
4.1 VGG-16 SSD MODEL 14
4.1.1 NORMAL CONVOLUTION 15
4.1.2 DEPTH WISE CONVOLUTION FILTERS 15
4.1.3 1×1 CONVOLUTIONAL FILTERS 15
4.1.4 DETECTION OF HUMAN FROM BACKGROUND SUBTRACTION 16
4.1.5 TRACKING IN A SEQUENCE OF DETECTION 17
4.2 COMBINED BLOCK DIAGRAM OF STAGE 1&2 18
4.2.1 NODE MCU 19
4.2.2 SCHEMATIC & REFERENCE DESIGN 20
4.2.3 RFID – SYSTEM PRINCIPLE 26
4.2.4 PASSIVE RFID TAG 26
4.2.5 CIRCUIT DIAGRAM OF EM-18 RFID READER MODULE 27
4.2.6 INTERFACING EM-18 RFID READER MODULE WITH NODE MCU – CIRCUIT DIAGRAM, BREADBOARD WIRING 28
4.2.7 INTERFACING EM-18 RFID READER WITH NODE MCU – ON BOARD 28
4.3 SERVO MOTOR 29
4.3.1 VARIABLE PULSE WIDTH CONTROL SERVO MOTOR 33
4.4 LIGHT-EMITTING DIODE (LED) 35
4.5 PIEZO BUZZER 37
4.5.1 STRUCTURE OF PIEZO BUZZER 38
4.5.2 THEORY OF PIEZO BUZZER 38
5.1.1.1 DOWNLOAD ARDUINO IDE SOFTWARE 40
5.1.1.2 LAUNCH ARDUINO IDE. 41
5.1.1.3 OPEN OUR FIRST PROJECT 41
5.1.1.4 SELECT OUR ARDUINO BOARD 42
5.1.1.5 SELECT SERIAL PORT 43
5.1.1.6 UPLOAD THE PROGRAM TO THE BOARD 44
5.1.2.1 DOWNLOAD PAGE 45
5.1.2.2 ANACONDA INSTALLERS 45
5.1.2.3 ANACONDA SETUP 46
5.1.2.4 LICENSE AGREEMENT 46
5.1.2.5 INSTALLATION TYPE 47
5.1.2.6 DESTINATION FOLDER 47
5.1.2.7 INSTALL 48
5.1.2.8 ANACONDA +JETBRAINS 48
5.1.2.9 COMPLETING ANACONDA SETUP 49
5.2 WRITING A PYTHON PROGRAM 55
5.2.1 LAUNCHING IDLE FROM THE WINDOWS START MENU 56
5.2.2 THE IDLE INTERPRETER WINDOW 57
5.3 INVOKING THE INTERPRETER 60
6.1 DETECTION OF CAR 62
6.2 DETECTION OF BUS 62

LIST OF ABBREVIATIONS

S.NO ABBREVIATION EXPANSION

1 AI ARTIFICIAL INTELLIGENCE
2 GPS GLOBAL POSITIONING SYSTEM
3 UML UNIFIED MODELLING LANGUAGE


CHAPTER-1
INTRODUCTION

India is a nation with one of the most extensive networks of national highways. The private agencies involved in building this infrastructure are allowed to charge citizens a toll. Congestion and inefficiency prompted the government to design and implement an Electronic Toll Collection (ETC) system that can remove these problems and provide convenience to everyone involved in the toll collection process, directly or indirectly. Vehicle detection, tracking, classification and counting are important for military, civilian and government applications such as highway monitoring, traffic planning, toll collection and traffic flow analysis. For traffic management, vehicle detection is the basic step. Existing approaches include manual toll collection, RF tags, barcodes and number plate recognition. Each of these systems has drawbacks that lead to errors. The proposed system aims to design and develop a new, efficient toll collection system that will be a good low-cost alternative to all the other systems. Computer vision based techniques are more suitable because such systems do not disturb traffic during installation and are easy to modify. A camera captures images of vehicles passing through the toll booth, so each vehicle is detected through the camera. Depending on the area occupied by the vehicle, it is classified as light or heavy.

The government plans various stages to complete the projects under development, and the private organizations involved in building the infrastructure are allowed to charge citizens. Computer vision is an important field of artificial intelligence in which decisions about a real-world scene with high-dimensional data are taken. Many highway toll collection systems have already been developed and are widely used in India; some of these are manual toll collection, RF tags, barcodes and number plate recognition. To capture the number plate image, image processing is required, which can be done using OpenCV.

The numerical or symbolic information about a scene is derived from an appropriate model built with the help of object geometry, physics, statistics and learning theory. The scene under consideration is converted into image(s) or video(s), comprising many pictures,


captured by camera(s) focused on the scene from different locations. The various vision-related areas, for example scene reconstruction, event detection, video tracking, object recognition, object pose estimation and image restoration, are considered sub-areas of computer vision. Additionally, fields such as image processing, image analysis and machine vision are closely related to computer vision, and the techniques and applications of these areas overlap with one another. Image content is not interpreted in image processing, whereas in computer vision images are interpreted based on the properties of the content they contain. Computer vision may also include the extraction of 3D information from 2D images.

1.1 OUTLINE OF THE PROJECT

Transportation is nowadays a primary need, and every person looks for the most suitable daily transportation. However, a serious problem exists: the uncontrolled growth of personal vehicles has become one of the major transportation problems. According to research previously conducted by the Indonesian Ministry of Transportation, vehicle growth in Indonesia shows surprising figures: 12% for motorcycles, 8.89% for cars, and 2.2% for buses. Vehicle detection is essential in intelligent systems that aim to detect potentially dangerous situations involving vehicles in advance and warn the driver.

1.2 MOTIVATION

Classification and detection of objects has been the state-of-the-art approach for many areas of computer vision, and in the domain of video surveillance the classification of objects has been a major breakthrough. The Haar classifier is able to detect vehicles, and it has been shown that vehicle detection performance is greatly improved, with higher accuracy and robustness.

1.3 PROBLEM STATEMENT

This work presents a real-time vision framework that detects and tracks vehicles. The framework consists of three main stages. Vehicles are first detected using the Haar cascade algorithm. In the second stage, an adaptive appearance-based model is built to dynamically keep track of the detected vehicles, and the third stage performs data association to fuse the detection and tracking results.


CHAPTER-2
LITERATURE SURVEY

1. A. Hasnat, N. Shvai, A. Meicler, P. Maarek, A. Nakib, "New vehicle classification method based on hybrid classifiers," in IEEE Int. Conf. on Image Processing, IEEE, pp. 3084-3088, 2018.
In this work, a real-world problem of vehicle type classification for automatic toll collection is considered. This problem is very challenging because any loss of accuracy, even of the order of 1%, quickly turns into a significant economic loss. To deal with such a problem, many companies currently use Optical Sensors (OS) and human observers to correct the classification errors. Herein, a novel vehicle classification method is proposed. It consists of using one camera to obtain vehicle class probabilities with a set of Convolutional Neural Network (CNN) models, followed by a Gradient Boosting based classifier that fuses the continuous class probabilities with the discrete class labels obtained from the optical sensors. Results show that our method performs significantly better than the existing automatic toll collection system and, hence, will vastly reduce the workload of human operators.

2. N. Shvai, A. Meicler, A. Hasnat, E. Machover, P. Maarek, S. Loquet, A. Nakib, "Optimal ensemble classifiers-based classification for automatic vehicle type recognition," 2018 IEEE Congress on Evolutionary Computation (CEC), pp. 1-8, 2018.
In this work, a challenging vehicle type classification problem for the automatic toll collection task is considered, which is currently accomplished with Optical Sensors (OS) and corrected manually. Indeed, human operators are engaged to manually correct the OS-misclassified vehicles by observing the images obtained from the camera. In this paper, we
misclassified vehicles by observing the images obtained from the camera. In this paper, we
propose a novel vehicle classification algorithm, which first uses the camera images to obtain
the vehicle class probabilities using several Convolutional Neural Networks (CNNs) models
and then uses the Gradient Boosting based classifier to fuse the continuous class probabilities
with the discrete class labels obtained from two optical sensors. We train and evaluate our
method using a challenging dataset collected from the cameras of the toll collection points.
Results show that our method performs significantly (98.22% compared to 75.11%) better than
the existing automatic toll collection system.


3. J. Fang, Y. Zhou, Y. Yu, S. Du, "Fine-grained vehicle model recognition using a coarse-to-fine convolutional neural network architecture," IEEE Trans. Intell. Transp. Syst., vol. 18, no. 7, pp. 1782-1792, 2017.

Fine-grained vehicle model recognition is a challenging problem in intelligent transportation systems due to the subtle intra-category appearance variation. In this paper, we demonstrate that this problem can be addressed by locating the discriminative parts, where the most significant appearance variation appears, based on a large-scale training set. We also propose a corresponding coarse-to-fine method to achieve this, in which these discriminative regions are detected automatically based on feature maps extracted by a convolutional neural network. A mapping from the feature maps to the input image is established to locate the regions, and these regions are repeatedly refined until there are no more qualified ones. The global and local features are then extracted from the whole vehicle images and the detected regions, respectively. The experimental results show that our framework outperforms most of the state-of-the-art approaches, achieving 98.29% accuracy over 281 vehicle makes and models.

4. L. Jiang, J. Li, L. Zhuo, Z. Zhu, "Robust vehicle classification based on the combination of deep features and handcrafted features," in Trustcom/BigDataSE/ICESS 2017, IEEE, pp. 859-865, 2017.

Vehicle classification plays an important part in Intelligent Transport Systems. Recently, deep learning has shown outstanding performance in image classification; however, the numerous parameters of a deep network need to be optimized, which is time-consuming. PCANet is a light-weight deep learning network that is easy to train. In this paper, a new robust vehicle classification method is proposed, in which the deep features of PCANet and the handcrafted features of HOG (Histogram of Oriented Gradients) and Hu moments are extracted to describe the content of vehicle images. The vehicles are classified into six categories, i.e., large bus, car, motorcycle, minibus, truck and van. We construct a vehicle dataset including 13,700 vehicle images extracted from real surveillance videos to carry out the experiments. The average classification accuracy reaches 98.34%, which is 4.49% higher than that obtained from conventional "feature + classifier" methods and is also slightly higher than that of GoogLeNet (98.26%).


The proposed method does not need a GPU and is much more convenient than GoogLeNet. The experimental results demonstrate that, for a specific task, the combination of deep features obtained from a light-weight deep learning network and handcrafted features can achieve comparable or even higher performance than a deeper neural network.

5. M. Biglari, A. Soleimani, H. Hassanpour, "A cascaded part-based system for fine-grained vehicle classification," IEEE Trans. Intell. Transp. Syst., vol. 19, no. 1, pp. 273-283, 2018.

Vehicle make and model recognition (VMMR) has become an important part of intelligent
transportation systems. VMMR can be useful when license plate recognition is not feasible or
fake number plates are used. VMMR is a hard, fine-grained classification problem, due to the
large number of classes, substantial inner-class, and small inter-class distance. A novel
cascaded part-based system has been proposed in this paper for VMMR. This system uses
latent support vector machine formulation for automatically finding the discriminative parts of
each vehicle category. At the same time, it learns a part-based model for each category. Our
approach employs a new training procedure, a novel greedy parts localization, and a practical
multi-class data mining algorithm. In order to speed up the system processing time, a novel cascading scheme has been proposed. This cascading scheme applies classifiers to the input image in a sequential manner, based on two proposed criteria: confidence and frequency. The cascaded system can run up to 80% faster with comparable accuracy relative to the non-cascaded system. Extensive experiments on our data set and the CompCars data set indicate the outstanding performance of our approach, which achieves an average accuracy of 97.01% on our challenging data set and an average accuracy of 95.55% on the CompCars data set.


CHAPTER-3
METHODOLOGY
3.1 AIM
Two toll collection systems currently exist. In the first, every vehicle has to stop at a toll plaza along the highway to pay the toll: one person collects the money and issues a receipt, after which the gate is opened either mechanically or electronically for the driver to pass through the toll plaza. The other is a smart card system, in which the driver presents a smart card to the system installed at the toll plaza in order to pass.

3.2 SCOPE

Detection and classification of the vehicles and other objects of interest (e.g., the toll payment box). Predicting the scene class based on the spatial relationships among the vehicles of interest and contextually important objects. The system uses machine learning techniques to obtain a high degree of accuracy from what is called "training data". Haar cascades use the AdaBoost learning algorithm, which selects a small number of important features from a large set to yield an efficient classifier.

3.2.1 Drawbacks of Existing System

The existing system for collecting toll tax is a time-consuming method: long queues of vehicles form at the toll plaza, and there are chances of vehicles escaping payment of the toll tax.

3.3 HARDWARE REQUIREMENTS

The most common set of requirements defined by any operating system or software
application is the physical computer resources, also known as hardware. A hardware
requirements list is often accompanied by a hardware compatibility list, especially in the case of operating systems. The minimal hardware requirements are as follows:

1. Processor: Pentium IV

2. RAM: 8 GB


3. Processor: 2.4 GHz

4. Main Memory: 8 GB RAM

5. Processing Speed: 600 MHz

6. Hard Disk Drive: 1 TB

7. Keyboard: 104 keys

3.4 SOFTWARE REQUIREMENTS

Software requirements deal with defining the resource requirements and prerequisites that need to be installed on a computer for an application to function. These requirements need to be installed separately before the software is installed. The minimal software requirements are as follows:

1. Front End: Python


2. IDE: Anaconda
3. OS: Windows 10

Block Diagram: Stage 1:

Fig. 3.1: Block Diagram: Stage 1


3.5 SYSTEM DESIGN

Architecture

The proposed architecture processes the data in the following stages: training data analysis, feature extraction, model training, sliding windows, and results.

Fig. 3.2: Proposed Architecture


3.6 MODULE IMPLEMENTATION

3.6.1 Data Pre-processing

Data pre-processing is a technique used to convert raw data into a clean data set. In other words, whenever data is gathered from different sources it is collected in a raw format, which is not feasible for analysis.

3.6.2 Feature extraction

Feature extraction is the process of transforming the raw pixel values of an image into more meaningful and useful information that can be used in other techniques, such as point matching or machine learning.

3.6.3 Sliding Windows

The technique is best understood through the analogy of a window pane on a bus: consider a window of length n and a pane of length k fixed within it. Initially the pane is at the extreme left, i.e., 0 units from the left. Now relate the window to an array arr[] of size n and the pane to the current sum of k elements. If we push the window so that it moves one unit ahead, the pane covers the next k consecutive elements. Consider an array arr[] = {5, 2, -1, 0, 3} with k = 3 and n = 5. Applying the sliding window technique:

3.6.3.1 We compute the sum of the first k elements out of the n elements using a linear loop and store the sum in the variable window_sum.

3.6.3.2 Then we move linearly over the array until we reach the end, simultaneously keeping track of the maximum sum.

3.6.3.3 To get the sum of the current block of k elements, we subtract the first element of the previous block and add the last element of the current block. In the initial phase, the window sum is calculated starting from index 0; at this stage the window sum is 5 + 2 + (-1) = 6, and the maximum sum is set to this current window sum, i.e., 6.
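The same computation can be written as a short Python sketch (the function name is illustrative, not part of the project code):

def max_window_sum(arr, k):
    """Return the maximum sum of any k consecutive elements of arr."""
    if k > len(arr):
        raise ValueError("window size k must not exceed the array length")
    window_sum = sum(arr[:k])        # sum of the first k elements
    max_sum = window_sum
    for i in range(k, len(arr)):
        # Slide the window: drop the element leaving, add the element entering.
        window_sum += arr[i] - arr[i - k]
        max_sum = max(max_sum, window_sum)
    return max_sum

print(max_window_sum([5, 2, -1, 0, 3], 3))   # first window sum is 6, maximum is 6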

3.7 HAAR CASCADE ALGORITHM

Haar Cascade is a machine learning object detection algorithm used to identify objects in an image or video. It is based on the concept of features proposed by Paul Viola and Michael Jones in their 2001 paper "Rapid Object Detection using a Boosted Cascade of Simple Features". It is a machine learning based approach where a cascade function is trained from a large number of positive and negative images and is then used to detect objects in other images.
The algorithm has four stages:
1. Haar Feature Selection
2. Creating Integral Images
3. AdaBoost Training
4. Cascading Classifiers

Let's take face detection as an example. Initially, the algorithm needs a lot of positive images of faces and negative images without faces to train the classifier. Then we need to extract features from them.

3.7.1 Haar Feature Selection

The first step is to collect the Haar features. A Haar feature considers adjacent rectangular regions at a specific location in a detection window, sums up the pixel intensities in each region, and calculates the difference between these sums.

Fig. 3.3: Haar Feature


3.7.2 Creating Integral Images

Integral images are used to make this feature computation very fast. However, among all the features we calculate, most are irrelevant. For example, a good feature typically focuses on a property such as the region of the eyes being darker than the region of the nose and cheeks.
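To make the role of the integral image concrete, the following NumPy sketch (names are illustrative, not project code) builds an integral image and evaluates a simple two-rectangle Haar-like feature using only a handful of array lookups per rectangle:

import numpy as np

def integral_image(img):
    """Cumulative sum over rows and columns; ii[y, x] = sum of img[:y+1, :x+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, bottom, right):
    """Sum of pixels in the rectangle [top..bottom, left..right] using 4 lookups."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

img = np.arange(16, dtype=np.int64).reshape(4, 4)   # tiny example "image"
ii = integral_image(img)
# A two-rectangle Haar-like feature: difference between two adjacent regions.
feature = rect_sum(ii, 0, 0, 3, 1) - rect_sum(ii, 0, 2, 3, 3)
print(feature)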
3.7.3 AdaBoost Training

So how do we select the best features out of more than 160,000 possible features? This is accomplished using AdaBoost, which both selects the best features and trains the classifiers that use them. The algorithm constructs a "strong" classifier as a linear combination of weighted, simple "weak" classifiers.

During the detection phase, a window of the target size is moved over the input image, and the Haar features are calculated for each subsection of the image. Because each Haar feature is only a "weak" classifier (its detection quality is slightly better than random guessing), a large number of Haar features are necessary to describe an object with sufficient accuracy; they are therefore organized into cascade classifiers to form a strong classifier.

3.7.4 Cascading Classifiers

Fig. 3.4: Cascade Classifier

The cascade classifier consists of a collection of stages, where each stage is an ensemble of weak learners. The weak learners are simple classifiers called decision stumps. Each stage is trained using a technique called boosting. Boosting provides the ability to train

a highly accurate classifier by taking a weighted average of the decisions made by the weak
learners.

Each stage of the classifier labels the region defined by the current location of the
sliding window as either positive or negative. Positive indicates that an object was found
and negative indicates no objects were found. If the label is negative, the classification of
this region is complete, and the detector slides the window to the next location.

Fig. 3.5: Train cascade object detector
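As a rough illustration of how such a trained cascade is used in practice with OpenCV, the following sketch loads a cascade XML file and runs the sliding-window detector over one frame; cars.xml and the image path are placeholders for whatever trained vehicle cascade and camera frame are actually used:

import cv2

# Placeholder path: substitute the actual trained vehicle cascade XML file.
cascade = cv2.CascadeClassifier("cars.xml")

frame = cv2.imread("toll_frame.jpg")             # one frame from the toll-booth camera
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# detectMultiScale slides windows of increasing size over the image;
# each window is accepted only if it passes every stage of the cascade.
vehicles = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)

for (x, y, w, h) in vehicles:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detected.jpg", frame)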


3.8 APPLICATIONS

3.8.1 Virtual Personal Assistants

Siri, Alexa, Google Now are some of the popular examples of virtual personal
assistants. As the name suggests, they assist in finding information, when asked over voice.
All you need to do is activate them and ask "What is my schedule for today?", "What are the flights from Germany to London?", or similar questions. To answer, your personal assistant looks up the information, recalls your related queries, or sends a command to other resources (like phone apps) to collect the info. You can even instruct assistants to perform certain tasks, such as "Set an alarm for 6 AM tomorrow morning" or "Remind me to visit the Visa Office the day after


tomorrow". Machine learning is an important part of these personal assistants, as they collect and refine information on the basis of your previous interactions with them. Virtual assistants are integrated into a variety of platforms, for example: smart speakers (Amazon Echo and Google Home), smartphones (Samsung Bixby on the Samsung S8), and mobile apps (Google Allo).

3.8.2 Traffic Predictions

We have all been using GPS navigation services. While we do so, our current locations and velocities are saved at a central server for managing traffic. This data is then used to build a map of current traffic. While this helps in preventing traffic jams and in congestion analysis, the underlying problem is that only a small number of cars are equipped with GPS. When sharing these services, how do they minimize the detours? The answer is machine learning. Jeff Schneider, the engineering lead at Uber ATC, revealed in an interview that they use ML to define price surge hours by predicting rider demand. In the entire cycle of these services, ML plays a major role.

3.8.3 Video Surveillance

Imagine a single person monitoring multiple video cameras! Certainly a difficult job, and a boring one as well. This is why the idea of training computers to do this job makes sense. Video surveillance systems nowadays are powered by AI, which makes it possible to detect crimes before they happen. They track unusual behaviour of people, such as standing motionless for a long time, stumbling, or napping on benches. The system can thus give an alert to human attendants, which can ultimately help to avoid mishaps.

3.9 PROPOSED SYSTEM

1. Detection and classification of the vehicles and other objects of interest (e.g., toll
payment box).

2. Predicting the scene class based on the spatial relationships among the vehicles of
interest and contextually important objects.


CHAPTER-4
DESIGN METHODOLOGY

A. Single Shot Detector (SSD) algorithm

SSD is a popular object detection algorithm that was developed at Google. It is based on the VGG-16 architecture, which makes SSD relatively simple and easy to implement.

Fig. 4.1: VGG-16 SSD model

A set of default boxes is made to pass over several feature maps in a convolutional
manner. If an object detected is one among the object classifiers during prediction, then a
score is generated. The object shape is adjusted to match the localization box. For each
box, shape offsets and confidence level are predicted. During training, default boxes are
matched to the ground truth boxes. The fully connected layers are discarded by SSD
architecture.

Confidence is a measure of how certain the system is that a predicted object is the actual object. The elimination of feature resampling and the encapsulation of all computation in a single network make SSD simple to train and easy to combine with MobileNets. Compared to YOLO, SSD is faster, while remaining as accurate as methods that perform explicit region proposals and pooling (such as Faster R-CNN).


B. MobileNets algorithm

MobileNets use depthwise separable convolutions, which help in building light-weight deep neural networks. The MobileNets model is most appropriate for mobile and embedded vision based applications where computing resources are limited. The main objective of MobileNets is to optimize latency while keeping the networks small; the emphasis is on size rather than on speed alone. MobileNets are constructed from depthwise separable convolutions. In a normal convolution, the input feature map is fragmented into multiple feature maps after the convolution.

Fig. 4.1.1: Normal Convolution

Fig. 4.1.2: Depth wise Convolution Filters

Fig. 4.1.3: 1×1 Convolutional Filters

The number of parameters is reduced significantly by this model through the use of depthwise separable convolutions, compared to a network with normal convolutions of the same depth. This reduction of parameters results in a light-weight neural network.
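A small Python calculation (with illustrative layer sizes) makes the parameter reduction concrete by comparing a standard convolution with a depthwise separable one for the same input and output channels:

def standard_conv_params(k, c_in, c_out):
    # k x k kernel applied across c_in channels for each of c_out filters
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    depthwise = k * k * c_in          # one k x k filter per input channel
    pointwise = c_in * c_out          # 1x1 convolution mixing the channels
    return depthwise + pointwise

k, c_in, c_out = 3, 64, 128
std = standard_conv_params(k, c_in, c_out)        # 73,728 parameters
sep = depthwise_separable_params(k, c_in, c_out)  # 8,768 parameters
print(std, sep, round(std / sep, 1))              # roughly an 8x reduction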


A. Object detection

Frame Differencing

Frames are captured from the camera at regular intervals of time, and the difference is estimated from consecutive frames.

Optical Flow

This technique estimates and calculates the optical flow field using an optical flow algorithm. A local mean algorithm is then used to enhance it, and a self-adaptive algorithm filters out noise. It adapts well to the number and size of the objects and helps avoid time-consuming and complicated preprocessing methods.

Background Subtraction

Background subtraction (BS) is a rapid method of localizing moving objects in a video captured by a stationary camera, and it forms the primary step of a multi-stage vision system. This type of process separates the background from the foreground objects in a sequence of images.

Fig. 4.1.4: Detection of human from background subtraction


Fig. 4.1.4 depicts the detection of a human using background subtraction. The foreground (person) is detected and separated from the background of the image for further preprocessing. The separation effect is shown step by step, after which localization of the region of interest takes place.
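A minimal OpenCV background subtraction sketch along these lines might look as follows; the video file name is a placeholder, and the blob-size threshold is an arbitrary illustrative value:

import cv2

cap = cv2.VideoCapture("traffic.mp4")           # placeholder video file
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)              # foreground mask for this frame
    # Keep only reasonably large blobs as candidate moving objects.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imshow("foreground", frame)
    if cv2.waitKey(30) & 0xFF == 27:            # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()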
B. Object tracking

Object tracking is done in video sequences such as security camera and CCTV surveillance feeds; the objective is to track the path followed by an object and its speed. The rate of real-time detection can be increased by employing object tracking and running classification only on a few frames captured at a fixed interval of time. Object detection can run at a slow frame rate, looking for objects to lock onto, and once those objects are detected and locked, object tracking can run at a faster frame rate.


Fig. 4.1.5: Tracking in a sequence of detection

Two ways in which the object can be tracked in the above example are: (1) Tracking in a sequence of detections. In this method, a CCTV video sequence of traffic in motion is used. Suppose someone wants to track a car's or a person's movement: different images or frames are taken at different intervals of time. With the help of these images, the target object (a car or a person) can be identified, and by checking how the object has moved across different frames of the video, it can be tracked. The velocity of the object can be calculated from the object's displacement between frames taken at different intervals of time. This method actually has a flaw: one is not really tracking, only detecting the object at different instants of time. (2) An improved method is "detection with dynamics". In this method, the car's trajectory or movement is estimated: by checking its position at a particular time t and estimating its position at another time, say t+10, the actual image of the car at time t+10 can be matched against this estimate.
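The "detection with dynamics" idea can be sketched as a simple constant-velocity prediction in Python (purely illustrative; a practical tracker would typically use a Kalman filter):

def predict_position(pos_t, pos_prev, dt_prev, dt_ahead):
    """Predict a future (x, y) position assuming constant velocity.

    pos_t    -- position at the current time t
    pos_prev -- position at an earlier detection, dt_prev seconds before t
    dt_ahead -- how far ahead of t to predict (e.g. 10 frames converted to seconds)
    """
    vx = (pos_t[0] - pos_prev[0]) / dt_prev
    vy = (pos_t[1] - pos_prev[1]) / dt_prev
    return (pos_t[0] + vx * dt_ahead, pos_t[1] + vy * dt_ahead)

# Car detected at (100, 240) one second ago and at (140, 240) now:
print(predict_position((140, 240), (100, 240), 1.0, 10.0))   # (540.0, 240.0)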

BLOCK DIAGRAM

[Basic block diagram: web camera → system/laptop running PyCharm, NumPy and OpenCV → data]


Since AlexNet stormed the research world at the 2012 ImageNet Large Scale Visual Recognition Challenge, deep learning based detection has far exceeded most of the traditional computer vision methods used in the literature. In computer vision, convolutional neural networks distinguish themselves in image classification. The block diagram above shows the basic flow of detection and tracking. In this work, SSD and MobileNets based algorithms are implemented for detection and tracking in a Python environment. Object detection involves detecting the region of interest of an object from a given class of image; the different methods are frame differencing, optical flow, and background subtraction. This is a method of detecting and locating an object which is in motion with the help of a camera. Detection and tracking algorithms are described by extracting the features of images and video for security applications [3] [7] [8].
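As a sketch of how an SSD + MobileNet detector can be run in such a Python/OpenCV environment, the snippet below uses OpenCV's dnn module; the prototxt/caffemodel file names are placeholders for whichever pretrained MobileNet-SSD weights are actually used:

import cv2

# Placeholder model files for a pretrained MobileNet-SSD network.
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")

frame = cv2.imread("toll_frame.jpg")
h, w = frame.shape[:2]

# MobileNet-SSD expects a 300x300 mean-subtracted, scaled blob.
blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                             scalefactor=0.007843, size=(300, 300), mean=127.5)
net.setInput(blob)
detections = net.forward()

for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence > 0.5:                        # keep confident detections only
        class_id = int(detections[0, 0, i, 1])
        x1, y1, x2, y2 = (detections[0, 0, i, 3:7] * [w, h, w, h]).astype(int)
        cv2.rectangle(frame, (x1, y1), (x2, y2), (255, 0, 0), 2)

cv2.imwrite("ssd_detections.jpg", frame)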

4.1 Hardware Description

Fig. 4.2: COMBINED BLOCK DIAGRAM OF STAGE 1&2


4.1.1 NODE MCU


Overview
Node MCU is an open-source development board and firmware based on the ESP8266
Wi-Fi module, designed specifically for building IoT (Internet of Things) applications. It
combines the features of the ESP8266 microcontroller with a USB interface and voltage
regulator, making it easy to program and power using a micro USB cable. The board comes
with built-in Wi-Fi capabilities, allowing it to connect to wireless networks and transmit or
receive data over the internet. Node MCU supports various communication protocols such as
UART, I2C, SPI, and PWM, and offers several GPIO (General Purpose Input/Output) pins for
interfacing with sensors, actuators, and other modules. It can be programmed using the Arduino
IDE, Lua scripting language, Micro Python, or the Espressif SDK, offering flexibility for both
beginners and advanced users. One of its key advantages is the ability to quickly prototype IoT
projects due to its compact size, low power consumption, and strong community support.
Despite having limited GPIOs and only one ADC pin, it remains a popular choice for
applications like home automation, smart agriculture, weather monitoring, and wireless
sensor networks.
The board's integrated Wi-Fi and small form factor make it highly cost-effective and compact. Another standout feature is its ease of programming: it supports multiple programming environments such as the Arduino IDE, Lua, and MicroPython, enabling both beginners and professionals to develop applications quickly. The board also includes a USB-to-serial converter and voltage regulator, simplifying the connection to a computer for uploading code and powering the device. With its GPIO pins, support for various communication protocols, and strong community support, Node MCU offers a powerful yet accessible solution for rapid IoT development.

Fig. 4.2.1: Node MCU
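Since the board can also run MicroPython, a minimal sketch of bringing its Wi-Fi up from the Python side could look like the following (the SSID and password are placeholders):

# MicroPython on the ESP8266-based Node MCU (illustrative sketch).
import network
import time

ssid, password = "MY_SSID", "MY_PASSWORD"      # placeholders

sta = network.WLAN(network.STA_IF)             # station interface
sta.active(True)
sta.connect(ssid, password)

# Wait until the board has joined the access point.
while not sta.isconnected():
    time.sleep(0.5)

print("connected, network config:", sta.ifconfig())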


4.1.2 Schematic & Reference Design

Fig. 4.2.2: Schematic & Reference Design

The Arduino reference design can use an ATmega8, 168, or 328. Current models use an ATmega328, but an ATmega8 is shown in the schematic for reference. The pin configuration is identical on all three processors.


4.1.3 Specifications

Microcontroller: ESP8266
Operating Voltage: 5V
Input Voltage (recommended): 7-12V
Input Voltage (limits): 6-20V
Digital I/O Pins: 11 (of which 6 provide PWM output)
Analog Input Pins: 1
DC Current per I/O Pin: 40 mA
DC Current for 3.3V Pin: 50 mA
Flash Memory: 4 MB (ESP8266), of which 0.5 KB is used by the bootloader
SRAM: 64 KB (ESP8266)
EEPROM: 1 KB (ESP8266)
Clock Speed: 80 MHz

4.1.4 Power

The Arduino Uno can be powered via the USB connection or with an external power
supply. The power source is selected automatically.
External (non-USB) power can come either from an AC-to-DC adapter (wall-wart) or a battery. The adapter can be connected by plugging a 2.1 mm center-positive plug into the board's power jack. Leads from a battery can be inserted in the GND and VIN pin headers of the POWER connector. The board can operate on an external supply of 6 to 20 volts. If supplied with less than 7V, however, the 5V pin may supply less than five volts and the board may be unstable. If using more than 12V, the voltage regulator may overheat and damage the board. The recommended range is 7 to 12 volts. The power pins are as follows:

VIN. The input voltage to the board when it is using an external power source (as opposed to 5 volts from the USB connection).


5V. This pin outputs a regulated 5V from the regulator on the board. The board can be supplied with power either from the DC power jack (7-12V), the USB connector (5V), or the VIN pin of the board (7-12V). Supplying voltage via the 5V or 3.3V pins bypasses the regulator and can damage your board; we don't advise it.
3V3. A 3.3 volt supply generated by the on-board regulator. Maximum current draw is 50 mA.
GND. Ground pins.

IOREF. This pin on the Arduino board provides the voltage reference with which the
microcontroller operates. A properly configured shield can read the IOREF pin voltage and
select the appropriate power source or enable voltage translators on the outputs for working
with the 5V or 3.3V.

4.1.5 Memory

The ESP8266 module has 4 MB of flash memory (with 0.5 KB used for the boot loader), 64 KB of SRAM and 1 KB of EEPROM (which can be read and written with the EEPROM library).

4.1.6 Input and Output

Each of the 11 digital pins on the Uno can be used as an input or output, using the pinMode(), digitalWrite(), and digitalRead() functions. They operate at 5 volts. Each pin can provide or receive a maximum of 40 mA and has an internal pull-up resistor (disconnected by default) of 20-50 kOhm. In addition, some pins have specialized functions:

Serial: 0 (RX) and 1 (TX). Used to receive (RX) and transmit (TX) TTL serial data. These
pins are connected to the corresponding pins of the ATmega8U2 USB-to-TTL Serial chip.

External Interrupts: 2 and 3. These pins can be configured to trigger an interrupt on a low value, a rising or falling edge, or a change in value. See the attachInterrupt() function for details.


PWM: 3, 5, 6, 9, 10, and 11. Provide 8-bit PWM output with the analogWrite() function.
SPI: 10 (SS), 11 (MOSI), 12 (MISO), 13 (SCK). These pins support SPI

communication using the SPI library.

LED: 13. There is a built-in LED connected to digital pin 13. When the pin is HIGHvalue, the
LED is on, when the pin is LOW, it’s off.

The Uno has 6 analog inputs, labeled A0 through A5, each of which provides 10 bits of resolution (i.e., 1024 different values). By default they measure from ground to 5 volts, though it is possible to change the upper end of their range using the AREF pin and the analogReference() function. Additionally, some pins have specialized functionality:
TWI: A4 or SDA pin and A5 or SCL pin. Support TWI communication using the Wire library.
There are a couple of other pins on the board:

AREF. Reference voltage for the analog inputs. Used with analogReference().

Reset. Bring this line LOW to reset the microcontroller. Typically used to add a reset button to shields which block the one on the board.

4.1.7 Communication

The Arduino Nano has a number of facilities for communicating with a computer, another Arduino, or other microcontrollers. The ATmega328 provides UART TTL (5V) serial communication, which is available on digital pins 0 (RX) and 1 (TX). An ATmega16U2 on the board channels this serial communication over USB and appears as a virtual COM port to software on the computer. The 16U2 firmware uses the standard USB COM drivers, and no external driver is needed; however, on Windows, a .inf file is required. The Arduino software
includes a serial monitor which allows simple textual data to be sent to and from the Arduino
board. The RX and TX LEDs on the board will flash when data is being transmitted via the
USB-to-serial chip and USB connection to the computer (but not for serial communication on
pins 0 and 1).


A SoftwareSerial library allows serial communication on any of the Uno's digital pins. The ATmega328 also supports I2C (TWI) and SPI communication. The Arduino software includes a Wire library to simplify use of the I2C bus; see the documentation for details. For SPI communication, use the SPI library.
4.1.8 Programming

The Arduino Uno can be programmed with the Arduino software. The ATmega328 on the Arduino Nano comes preburned with a boot loader that allows you to upload new code to it without the use of an external hardware programmer. It communicates using the original STK500 protocol (reference, C header files). You can also bypass the boot loader and program the microcontroller through the ICSP (In-Circuit Serial Programming) header; see these instructions for details.
The ATmega16U2 (or 8U2 on the Rev1 and Rev2 boards) firmware source code is available. The ATmega16U2/8U2 is loaded with a DFU boot loader, which can be activated as follows. On Rev1 boards: connect the solder jumper on the back of the board (near the map of Italy) and then reset the 8U2. On Rev2 or later boards: a resistor pulls the 8U2/16U2 HWB line to ground, making it easier to put into DFU mode. You can then use Atmel's FLIP software (Windows) or the DFU programmer (Mac OS X and Linux) to load new firmware, or you can use the ISP header with an external programmer (overwriting the DFU boot loader). See the user-contributed tutorial for more information.

4.1.9 Automatic (Software) Reset

Rather than requiring a physical press of the reset button before an upload, the Arduino
Nano is designed in a way that allows it to be reset by software running on a connected
computer. One of the hardware flow control lines (DTR) of the ATmega8U2/16U2 is connected to the reset line of the ATmega328 via a 100 nanofarad capacitor. When this line is asserted
(taken low), the reset line drops long enough to reset the chip. The Arduino software uses this
capability to allow you to upload code by simply pressing the upload button in the Arduino
environment. This means that the boot loader can have a shorter timeout, as the lowering of
DTR can be well-coordinated with the start of the upload.


This setup has other implications. When the Nano is connected to either a computer
running Mac OS X or Linux, it resets each time a connection is made to it from software (via
USB). For the following half-second or so, the boot loader is running on the Nano. While it is
programmed to ignore malformed data (i.e. anything besides an upload of new code), it will
intercept the first few bytes of data sent to the board after a connection is opened. If a sketch
running on the board receives one-time configuration or other data when it first starts, make sure
that the software with which it communicates waits a second after opening the connection and
before sending this data. The Nano contains a trace that can be cut to disable the auto-reset. The
pads on either side of the trace can be soldered together to re-enable it. It is labeled "RESET-EN".
You may also be able to disable the auto-reset by connecting a 110 ohm resistor from 5V to the
reset line.

USB Overcurrent Protection

The Arduino Nano has a resettable polyfuse that protects your computer's USB ports
from shorts and overcurrent. Although most computers provide their own internal protection, the
fuse provides an extra layer of protection. If more than 500 mA is applied to the USB port, the
fuse will automatically break the connection until the short or overload is removed.

4.2 RFID READER EM-18

The EM-18 RFID reader is one of the most commonly used RFID readers for reading 125 kHz tags. It features low cost, low power consumption, a small form factor and ease of use. It provides both UART and Wiegand26 output formats, and it can be interfaced directly with microcontrollers using UART or with a PC using an RS232 converter.
Working of the EM-18 RFID module: the module radiates a 125 kHz field through its coils, and when a 125 kHz passive RFID tag is brought into this field it is energized by it. These passive RFID tags mostly consist of the CMOS IC EM4102, which can draw enough power for its operation from the field generated by the reader.


Fig. 4.2.3: RFID – System Principle

By changing the modulation current through the coils, the tag sends back the information contained in its factory-programmed memory.

Fig. 4.2.4: Passive RFID Tag


Block Diagram

Fig. 4.2.5: Circuit Diagram Of EM-18 RFID Reader Module

EM-18 RFID Reader Module – Bottom View

Pin No. Name Function

1 VCC 5V

2 GND Ground

3 BEEP BEEP and LED

4 ANT No Use

5 ANT No Use

6 SEL HIGH selects RS232, LOW selects WEIGAND

7 TX UART TX, When RS232 is Selected

8 D1 WIEGAND Data 1

9 D0 WIEGAND Data 0
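With SEL held high (RS232/UART mode), the reader streams each tag as a short ASCII string on its TX pin. Assuming the reader's TX line is brought to a PC through a USB-to-serial adapter (the port name and the commonly quoted 12-character frame at 9600 baud are assumptions to check against the module's datasheet), the tag can be read with pyserial:

import serial

# Assumed port and the EM-18's usual 9600 baud, 8N1 serial settings.
port = serial.Serial("COM3", baudrate=9600, timeout=2)

print("Hold a 125 kHz tag near the reader...")
while True:
    frame = port.read(12)            # assumed 12-character ASCII frame per tag
    if len(frame) == 12:
        tag_id = frame.decode("ascii", errors="replace")
        print("Tag read:", tag_id)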


Fig. 4.2.6: Interfacing EM-18 RFID Reader Module with Node MCU – Circuit Diagram, Breadboard Wiring

Fig. 4.2.7: Interfacing EM-18 RFID Reader with Node MCU – On Board


4.3 SERVO MOTOR

A servo motor is an electrical device which can push or rotate an object with great precision. If you want to rotate an object to a specific angle or distance, you use a servo motor. It is simply a motor run through a servo mechanism. If the motor is DC powered it is called a DC servo motor, and if it is an AC powered motor it is called an AC servo motor. Due to these features they are used in many applications such as toy cars, RC helicopters and planes, robotics and machines.
Servo motors are rated in kg/cm (kilogram per centimetre); most hobby servo motors are rated at 3 kg/cm, 6 kg/cm or 12 kg/cm. This rating tells you how much weight the servo motor can lift at a particular distance. For example, a 6 kg/cm servo motor should be able to lift 6 kg if the load is suspended 1 cm away from the motor's shaft; the greater the distance, the lower the weight-carrying capacity.

The position of a servo motor is decided by electrical pulse and its circuitry is
placed beside the motor.

Servo Mechanism

It consists of three parts:

1. Controlled device
2. Output sensor
3. Feedback system
Fig. 4.3: Servo Motor

It is a closed-loop system that uses a feedback system to control the motion and final position of the shaft. Here the device is controlled by a feedback signal generated by comparing the output signal with the reference input signal.

Here the reference input signal is compared with the reference output signal, and a third signal is produced by the feedback system. This third signal acts as the input signal to the controlled device. The signal is present as long as the feedback signal is generated, i.e., as long as there is a difference between the reference input signal and the reference output signal.


Working principle of Servo Motors

A servo consists of a motor (DC or AC), a potentiometer, a gear assembly and a controlling circuit. First of all, the gear assembly is used to reduce the RPM and increase the torque of the motor. Say at the initial position of the servo motor shaft, the position of the potentiometer knob is such that no electrical signal is generated at the output port of the potentiometer. Now an electrical signal is given to the other input terminal of the error detector amplifier. The error signal acts as the input for the motor, and the motor starts rotating. The motor shaft is connected to the potentiometer, so as the motor rotates the potentiometer rotates as well and generates a signal. Thus, as the potentiometer's angular position changes, its output feedback signal changes.

Controlling Servo Motor:

All servo motors have three wires coming out of them: two are used for the supply (positive and negative) and one is used for the signal sent from the MCU.

A servo motor is controlled by PWM (pulse width modulation) provided through the control wire. There is a minimum pulse, a maximum pulse and a repetition rate. A servo motor can turn 90 degrees in either direction from its neutral position. For example, a 1.5 ms pulse makes the motor turn to the 90° (neutral) position; if the pulse is shorter than 1.5 ms the shaft moves towards 0°, and if it is longer than 1.5 ms the servo turns towards 180°.

A servo motor works on the PWM (pulse width modulation) principle, meaning its angle of rotation is controlled by the duration of the pulse applied to its control pin. Basically, a servo motor is made up of a DC motor which is controlled by a variable resistor (potentiometer) and some gears. The high speed of the DC motor is converted into torque by the gears. We know that WORK = FORCE × DISTANCE: in a DC motor the force is low and the distance (speed) is high, while in a servo the force is high and the distance is low. The potentiometer is connected to the output shaft of the servo to measure the angle and stop the DC motor at the required angle. The servo motor is most commonly used for high-technology devices in industrial applications such as automation technology.
This section discusses the definition, types, mechanism, principle, working, controlling, and lastly the applications of a servo motor. A servo motor is a rotary actuator or a motor that


allows for precise control of angular position, acceleration and velocity. Basically it has certain capabilities that a regular motor does not have: it pairs a regular motor with a sensor for position feedback.

Principle of working :

A servo motor works on the PWM (pulse width modulation) principle, which means its angle of rotation is controlled by the duration of the pulse applied to its control pin. Basically, a servo motor is made up of a DC motor which is controlled by a variable resistor (potentiometer) and some gears.

Mechanism of servomotor:

Basically, a servo motor is a closed-loop servomechanism that uses position feedback to control its motion and final position. The input to its control is a signal (either analogue or digital) representing the position commanded for the output shaft.
The motor incorporates some type of encoder to provide position and speed feedback. In the simplest case, only the position is measured. The measured position of the output is compared with the commanded position, the external input to the controller. If the output position differs from the expected output, an error signal is generated, which causes the motor to rotate in either direction as needed to bring the output shaft to the appropriate position. As the position is approached, the error signal reduces to zero, and finally the motor stops.
The simplest servomotors use only position sensing via a potentiometer and bang-bang control of the motor, and the motor always rotates at full speed. Although this type of servomotor does not have many uses in industrial motion control, it forms the basis of the simple and cheap servos used in radio control models.
More capable servomotors use optical rotary encoders to measure the speed of the output shaft and a variable-speed drive to control the motor speed. When combined with a PID control algorithm, this allows the servomotor to reach its commanded position more quickly and more precisely, with less overshoot.


Working of servomotors:

Servo motors control position and speed very precisely. A potentiometer senses the mechanical position of the shaft, and it is coupled to the motor shaft through gears. The current position of the shaft is converted into an electrical signal by the potentiometer and compared with the command input signal. In modern servo motors, electronic encoders or sensors sense the position of the shaft.

The command input is given according to the required position of the shaft. If the feedback signal differs from the given input, an error signal is generated. This error signal is amplified and applied as the input to the motor, so the motor rotates. When the shaft reaches the required position, the error signal becomes zero and the motor stays standstill, holding the position.

The command input is in the form of electrical pulses. Since the actual input to the motor is the difference between the feedback signal (current position) and the required signal, the speed of the motor is proportional to the difference between the current position and the required position. The amount of power required by the motor is proportional to the distance it needs to travel.

Controlling of servomotors :
Usually a servomotor can turn 90 degrees in either direction, giving a maximum travel of 180 degrees. A normal servo motor cannot rotate beyond this because of a built-in mechanical stop.
A servo has three wires: positive, ground and control. The servo motor is controlled by sending a pulse-width-modulated (PWM) signal on the control wire. A pulse is sent every 20 milliseconds, and the width of the pulse determines the position of the shaft.
For example, a pulse of 1 ms moves the shaft anticlockwise to -90 degrees, a pulse of 1.5 ms moves the shaft to the neutral position (0 degrees), and a pulse of 2 ms moves the shaft clockwise to +90 degrees.
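As a minimal illustrative sketch in Python (the constants and function names below are assumptions for illustration only, not part of the project code), the angle-to-pulse-width mapping described above can be written as:

# Sketch: map a servo angle in [-90, +90] degrees to the PWM pulse width
# (1 ms at -90 deg, 1.5 ms at 0 deg, 2 ms at +90 deg) within a 20 ms frame.
PERIOD_MS = 20.0      # one PWM frame every 20 milliseconds
MIN_PULSE_MS = 1.0    # pulse width at -90 degrees
MAX_PULSE_MS = 2.0    # pulse width at +90 degrees

def angle_to_pulse_ms(angle_deg):
    """Convert an angle in [-90, +90] degrees to a pulse width in milliseconds."""
    angle_deg = max(-90.0, min(90.0, angle_deg))   # clamp to the servo's range
    return MIN_PULSE_MS + (angle_deg + 90.0) / 180.0 * (MAX_PULSE_MS - MIN_PULSE_MS)

def angle_to_duty_cycle(angle_deg):
    """Fraction of the 20 ms frame that the pulse stays high for a given angle."""
    return angle_to_pulse_ms(angle_deg) / PERIOD_MS

for a in (-90, 0, 90):
    print(a, "deg ->", angle_to_pulse_ms(a), "ms,",
          round(angle_to_duty_cycle(a) * 100, 1), "% duty cycle")

Running this prints 1.0 ms, 1.5 ms and 2.0 ms for -90, 0 and +90 degrees respectively, matching the pulse widths listed above.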


Fig. 4.3.1: Variable pulse width control servo motor


When a servo motor is commanded to move by applying a pulse of the appropriate width, the shaft moves to the required position and holds it, resisting any attempt to change it. The pulses must be repeated for the motor to keep holding the position.

Applications :
1. Robotics: A servomotor at every joint of a robot gives the robot arm its precise angle.
2. Conveyor belts: Servo motors move, stop, and start conveyor belts carrying products through various stages, for example in product packaging, bottling, and labelling.
3. Camera autofocus: A highly precise servo motor built into the camera adjusts the lens to sharpen out-of-focus images.
4. Solar tracking systems: Servo motors adjust the angle of solar panels throughout the day so that each panel continues to face the sun, harnessing maximum energy from sunup to sundown.


4.4 LIGHT EMITTING DIODE(LED)

A light-emitting diode (LED) is a two-lead semiconductor light source. It is a p–n junction diode that emits light when activated.[5] When a suitable current is applied to the leads,
electrons are able to recombine with electron holes within the device, releasing energy in the form
of photons. This effect is called electroluminescence, and the color of the light (corresponding to
the energy of the photon) is determined by the energy band gap of the semiconductor. LEDs are
typically small (less than 1 mm2) and integrated optical components may be used to shape the
radiation pattern.

Appearing as practical electronic components in 1962, the earliest LEDs emitted low-
intensity infrared light. Infrared LEDs are still frequently used as transmitting elements in remote-
control circuits, such as those in remote controls for a wide variety of consumer electronics. The
first visible-light LEDs were of low intensity and limited to red. Modern LEDs are available across
the visible, ultraviolet, and infrared wavelengths, with very high brightness.

Early LEDs were often used as indicator lamps for electronic devices, replacing small
incandescent bulbs. They were soon packaged into numeric readouts in the form of seven-segment
displays and were commonly seen in digital clocks. Recent developments have produced LEDs
suitable for environmental and task lighting. LEDs have led to new displays and sensors, while
their high switching rates are useful in advanced communications technology.

LEDs have many advantages over incandescent light sources, including lower energy
consumption, longer lifetime, improved physical robustness, smaller size, and faster switching.
Light-emitting diodes are used in applications as diverse as aviation lighting, automotive
headlamps, advertising, general lighting, traffic signals, camera flashes, lighted wallpaper and
medical devices.[10] They are also significantly more energy efficient and, arguably, have fewer
environmental concerns linked to their disposal.[11][12]

Unlike a laser, the light emitted from an LED is neither coherent nor monochromatic, but its spectrum is narrow with respect to human vision, and for most purposes the light from a simple diode element can be regarded as functionally monochromatic.


LED development began with infrared and red devices made with gallium arsenide. Advances in
materials science have enabled making devices with ever-shorter wavelengths, emitting light in a
variety of colors.

Fig. 4.4: Light-Emitting Diode (LED)

LEDs are usually built on an n-type substrate, with an electrode attached to the p-type layer
deposited on its surface. P-type substrates, while less common, occur as well. Many commercial
LEDs, especially GaN/InGaN, also use sapphire substrate.

Efficiency and operational parameters

Typical indicator LEDs are designed to operate with no more than 30–60 milliwatts (mW)
of electrical power. Around 1999, Philips Lumileds introduced power LEDs capable of continuous
use at one watt. These LEDs used much larger semiconductor die sizes to handle the large power
inputs. Also, the semiconductor dies were mounted onto metal slugs to allow for greater heat
dissipation from the LED die.


One of the key advantages of LED-based lighting sources is high luminous efficacy. White
LEDs quickly matched and overtook the efficacy of standard incandescent lighting systems. In
2002, Lumileds made five-watt LEDs available with luminous efficacy of 18–22 lumens per watt
(lm/W). For comparison, a conventional incandescent light bulb of 60–100 watts emits around
15 lm/W, and standard fluorescent lights emit up to 100 lm/W.

Color        Wavelength range (nm)   Typical efficiency coefficient   Typical efficacy (lm/W)
Red          620 < λ < 645           0.39                             72
Red-orange   610 < λ < 620           0.29                             98
Green        520 < λ < 550           0.15                             93
Cyan         490 < λ < 520           0.26                             75
Blue         460 < λ < 490           0.35                             37

In September 2003, a new type of blue LED was demonstrated by Cree. This produced a
commercially packaged white light giving 65 lm/W at 20 mA, becoming the brightest white LED
commercially available at the time, and more than four times as efficient as standard
incandescents. In 2006, they demonstrated a prototype with a record white LED luminous efficacy
of 131 lm/W at 20 mA. Nichia Corporation has developed a white LED with luminous efficacy of
150 lm/W at a forward current of 20 mA.[80] Cree's XLamp XM-L LEDs, commercially available
in 2011, produce 100 lm/W at their full power of 10 W, and up to 160 lm/W at around 2 W input
power. In 2012, Cree announced a white LED giving 254 lm/W,[81] and 303 lm/W in March
2014.[82] Practical general lighting needs high-power LEDs, of one watt or more. Typical operating
currents for such devices begin at 350 mA.

United States Department of Energy (DOE) testing of commercial LED lamps designed to
replace incandescent lamps or CFLs showed that average efficacy was still about 46 lm/W in 2009
(tested performance ranged from 17 lm/W to 79 lm/W).


4.5 PIEZO-BUZZER

A buzzer or beeper is an audio signaling device, which may be mechanical, electromechanical, or piezoelectric. Typical uses of buzzers and beepers include alarms, timers and
confirmation of user input such as a mouse click or keystroke. A piezoelectric element may be
driven by an oscillating electronic circuit or other audio signal source, driven with a piezoelectric
audio amplifier. Sounds commonly used to indicate that a button has been pressed are a click, a ring
or a beep.
Initially this device was based on an electromechanical system which was identical to an
electric bell without the metal gong (which makes the ringing noise). Often these units were
anchored to a wall or ceiling and used the ceiling or wall as a sounding board. Another
implementation with some AC-connected devices was to implement a circuit to make the AC
current into a noise loud enough to drive a loudspeaker and hook this circuit up to a cheap 8-ohm
speaker. Nowadays, it is more popular to use a ceramic-based piezoelectric sounder like a Sonalert
which makes a high-pitched tone. Usually these were hooked up to "driver" circuits which varied
the pitch of the sound or pulsed the sound on and off.

Fig. 4.5: Piezo Buzzer


The word "buzzer" comes from the rasping noise that buzzers made when they were
electromechanical devices, operated from stepped-down AC line voltage at 50 or 60 cycles. Other
sounds commonly used to indicate that a button has been pressed are a ring or a beep. Some
systems, such as the one used on Jeopardy!, make no noise at all, instead using light.

Fig. 4.5.1: Structure Of Piezo Buzzer

Fig. 4.5.2: Theory Of Piezo Buzzer

Specifications:

Rated Voltage:
A piezo buzzer is driven by square waves (V p-p) at its rated voltage.
Operating Voltage:
The voltage range over which the buzzer operates normally; the minimum SPL is only guaranteed at the rated voltage.
Consumption Current:
The current drawn during regular operation; at the moment of starting, the buzzer typically draws about three times this current.

Capacitance:
A piezo buzzer can make higher SPL with higher capacitance, but it consumes more
electricity.
Sound Output:
The sound output is measured with a decibel meter at a distance of 10 cm, applying the rated voltage as a square wave.
Rated Frequency:
A buzzer can produce sound at other frequencies, but the highest and most stable SPL is obtained at the rated frequency.
Operating Temp.:
The buzzer works reliably between -30 °C and +70 °C.


CHAPTER-5
SOFTWARE DESCRIPTION
5.1 creating project in arduino 1.7.11 version
5.1.1 ARDUINO IDE INSTALLATION:
This section describes the process of installing the Arduino IDE and connecting the Arduino Uno to it.
Step 1
First we must have our Arduino board (we can choose our favourite board) and a USB cable. If we use an Arduino Uno, Arduino Duemilanove, Arduino Mega 2560, or Diecimila, we will need a standard USB cable (A plug to B plug); if we use an Arduino Nano, we will need an A to Mini-B cable.
Step 2 − Download the Arduino IDE software. Different versions of the Arduino IDE are available from the Download page on the official Arduino website. We must select the version compatible with our operating system (Windows, macOS, or Linux).
After the download is complete, unzip the file.

Fig. 5.1.1.1: Download Arduino IDE Software

Step 3 − Power up our board.


The Arduino Uno, Mega, Duemilanove and Arduino Nano automatically draw power from
either, the USB connection to the computer or an external power supply. If we are using an
Arduino Diecimila, we have to make sure that the board is configured to draw power from the
USB connection. The power source is selected with a jumper, a small piece of plastic that fits onto
two of the three pins between the USB and power jacks.
Check that it is on the two pins closest to the USB port.
Connect the Arduino board to the computer using the USB cable. The green power LED (labelled PWR) should glow.
Step 4 − Launch Arduino IDE.

Fig. 5.1.1.2: Launch Arduino IDE.

After our Arduino IDE software is downloaded, we need to unzip the folder. Inside the folder, we
can find the application icon with an infinity label (application.exe).
Double click the icon to start the IDE.
Step 5 − Open our first project.
Once the software starts, we have two options:
1) Create a new project

Fig. 5.1.1.3: Open our first project.



2) Open an existing project example.


To create a new project, select File → New.
To open an existing project example, select File → Example → Basics → Blink.
Here, we are selecting just one of the examples with the name Blink. It turns the LED on and off
with some time delay. We can select any other example from the list.
Step 6 − Select our Arduino board.

Fig. 5.1.1.4: Select our Arduino board.


To avoid errors while uploading the program to the board, we must select the correct Arduino board name, matching the board connected to the computer.
Go to Tools → Board and select the board.
Here we have selected the Arduino Uno board for this tutorial, but we must select the name matching the board we are actually using.
Step 7 − Select the serial port.

Fig. 5.1.1.5: Select serial port.


Select the serial device of the Arduino board from the Tools → Serial Port menu. This is likely to be COM3 or higher (COM1 and COM2 are usually reserved for hardware serial ports). To find out, disconnect the Arduino board and re-open the menu; the entry that disappears is the Arduino board. Reconnect the board and select that serial port.
Step 8 − Upload the program to the board.
Before explaining how we can upload our program to the board, we must demonstrate the function
of each symbol appearing in the Arduino IDE toolbar.

Fig. 5.1.1.6: Upload the program to the board.

A − Used to check if there is any compilation error.
B − Used to upload a program to the Arduino board.
C − Shortcut used to create a new sketch.
D − Used to directly open one of the example sketches.
E − Used to save the sketch.
F − Serial monitor used to receive serial data from the board and send serial data to the board.
Now, simply click the "Upload" button in the environment. Wait a few seconds; we will see the
RX and TX LEDs on the board, flashing. If the upload is successful, the message
"Done uploading" will appear in the status bar.
Note − If we have an Arduino Mini, NG, or other board, we need to press the reset button
physically on the board, immediately before clicking the upload button on the Arduino Software.


5.1.2 Anaconda IDE Installation for windows

1. Click on the link below to open the download page


https://www.anaconda.com/download/#windows

Fig. 5.1.2.1: Download page


2. Click on the Download button and check for the compatibility of your system. Then, it
will start downloading

Fig. 5.1.2.2: Anaconda Installers


3. Double click the installer to launch.


4. Click on Next.

Fig. 5.1.2.3: Anaconda Setup

5. Read the license agreement and click on "I Agree".

Fig. 5.1.2.4: License Agreement


6. Select the installation type "Just Me" unless you're installing it for all users (which requires Windows Administrator privileges) and click on Next.

Fig. 5.1.2.5: Installation Type

7. Select a destination folder to install Anaconda and click the Next button.

Fig. 5.1.2.6: Destination Folder


8. Choose whether to add Anaconda to your PATH environment variable. We recommend NOT adding Anaconda to the PATH environment variable, since this can interfere with other software. Instead, use Anaconda by opening Anaconda Navigator or the Anaconda Prompt from the Start Menu.

9. Click the Install button.

Fig. 5.1.2.7: Install

If you want to watch the packages Anaconda is installing, click on Show Details.

10. Click on the Next button.

Fig. 5.1.2.8: Anaconda +JetBrains


11. And then click the Finish button.

Fig. 5.1.2.9: Completing Anaconda Setup

12. After a successful installation, you will see the "Thanks for installing Anaconda" dialog box.

OpenCV
OpenCV is a cross-platform library using which we can develop real-time computer
vision applications. It mainly focuses on image processing, video capture and analysis
including features like face detection and object detection.
Let’s start the chapter by defining the term "Computer Vision".
Computer Vision
Computer Vision can be defined as a discipline that explains how to reconstruct, interpret, and understand a 3D scene from its 2D images, in terms of the properties of the structures present in the scene. It deals with modeling and replicating human vision using computer software and hardware.
Computer Vision overlaps significantly with the following fields:
• Image Processing: It focuses on image manipulation.
• Pattern Recognition: It explains various techniques to classify patterns.
• Photogrammetry: It is concerned with obtaining accurate measurements from images.


5.1.3 Computer Vision Vs Image Processing


Image processing deals with image-to-image transformation. The input and output of
image processing are both images.
Computer vision is the construction of explicit, meaningful descriptions of physical objects from their images. The output of computer vision is a description or an interpretation of the structures in a 3D scene.

Applications Of Computer Vision


Listed below are some of the major domains where computer vision is heavily used.
Robotics Application
• Localization ─ Determine robot location automatically
• Navigation
• Obstacles avoidance
• Assembly (peg-in-hole, welding, painting)
• Manipulation (e.g. PUMA robot manipulator)
• Human Robot Interaction (HRI): Intelligent robotics to interact with and serve people
Medicine Application
• Classification and detection (e.g. lesion or cells classification and tumor detection) 2D/3D
segmentation
• 3D human organ reconstruction (MRI or ultrasound)
• Vision-guided robotics surgery
Industrial Automation Application
• Industrial inspection (defect detection)
• Assembly
• Barcode and package label reading
• Object sorting
• Document understanding (e.g. OCR)
Security Application
• Biometrics (iris, finger print, face recognition)
• Surveillance ─ Detecting certain suspicious activities or behaviors
Transportation Application
• Autonomous vehicle


• Safety, e.g., driver vigilance monitoring

Features of OpenCV Library


Using OpenCV library, you can –
• Read and write images
• Capture and save videos
• Process images (filter, transform)
• Perform feature detection
• Detect specific objects such as faces, eyes, cars, in the videos or images.
• Analyze the video, i.e., estimate the motion in it, subtract the background, and track objects in
it.
OpenCV was originally developed in C++; Python and Java bindings are also provided. OpenCV runs on various operating systems such as Windows, Linux, macOS, FreeBSD, NetBSD, and OpenBSD. In this project, the Python bindings (the cv2 module) are used.
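As a brief, hedged sketch of these capabilities in Python (the image file names and window names are placeholders, not files from this project):

import cv2

# Read an image, convert it to grayscale and write the result back to disk
img = cv2.imread("input.jpg")                      # returns None if the file is missing
if img is not None:
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # a simple processing step
    cv2.imwrite("output_gray.jpg", gray)

# Capture frames from the default webcam and display them until 'q' is pressed
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("frame", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()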

OpenCV Library Modules

Following are the main library modules of the OpenCV library.


Core Functionality
This module covers the basic data structures such as Scalar, Point, Range, etc., that are
used to build OpenCV applications. In addition to these, it also includes the multidimensional
array Mat, which is used to store the images. In the Java library of OpenCV, this module is
included as a package with the name org.opencv.core.
Image Processing
This module covers various image processing operations such as image filtering,
geometrical image transformations, color space conversion, histograms, etc. In the Java library
of OpenCV, this module is included as a package with the name org.opencv.imgproc.
Video
This module covers the video analysis concepts such as motion estimation, background
subtraction, and object tracking. In the Java library of OpenCV, this module is included as a
package with the name org.opencv.video.
Video I/O
This module explains the video capturing and video codecs using OpenCV library. In

the Java library of OpenCV, this module is included as a package with the name
org.opencv.videoio.

Calib3d
This module includes algorithms regarding basic multiple-view geometry algorithms,
single and stereo camera calibration, object pose estimation, stereo correspondence and
elements of 3D reconstruction. In the Java library of OpenCV, this module is included as a
package with the name org.opencv.calib3d.
features2d
This module includes the concepts of feature detection and description. In the Java
library of OpenCV, this module is included as a package with the name org.opencv.features2d.
Objdetect
This module includes the detection of objects and instances of the predefined classes
such as faces, eyes, mugs, people, cars, etc. In the Java library of OpenCV, this module is
included as a package with the name org.opencv.objdetect.
Highgui
This is an easy-to-use interface with simple UI capabilities. In the Java library of OpenCV, the features of this module are included in two different packages, namely org.opencv.imgcodecs and org.opencv.videoio.
NumPy
NumPy is a Python package; the name stands for 'Numerical Python'. It is a library consisting of multidimensional array objects and a collection of routines for processing arrays. Numeric, the ancestor of NumPy, was developed by Jim Hugunin. Another package, Numarray, was also developed with some additional functionality. In 2005, Travis Oliphant created the NumPy package by incorporating the features of Numarray into Numeric. There are many contributors to this open-source project.

Operations using NumPy

Using NumPy, a developer can perform the following operations:

• Mathematical and logical operations on arrays.

• Fourier transforms and routines for shape manipulation.

• Operations related to linear algebra. NumPy has in-built functions for linear algebra and random number generation.
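A brief, hedged illustration of these operations in Python (the array values are arbitrary examples):

import numpy as np

a = np.array([[1, 2], [3, 4]])

# Mathematical and logical operations on arrays
print(a + 10)                                    # element-wise arithmetic
print(a > 2)                                     # element-wise comparison

# Fourier transform and shape manipulation
print(np.fft.fft(np.array([1.0, 0.0, -1.0, 0.0])))
print(a.reshape(1, 4))                           # view the same data as a 1x4 array

# Linear algebra and random number generation
print(np.linalg.det(a))                          # determinant
print(np.linalg.inv(a))                          # matrix inverse
print(np.random.rand(2, 2))                      # uniform random 2x2 matrix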

NumPy – A Replacement for MatLab

NumPy is often used along with packages like SciPy (Scientific Python) and Matplotlib (a plotting library). This combination is widely used as a replacement for MatLab, a popular platform for technical computing, since Python is now seen as a more modern and complete programming language.

Being open source is an added advantage of NumPy. The standard Python distribution does not come bundled with the NumPy module. A lightweight way to install NumPy is with the popular Python package installer, pip:

pip install numpy

The best way to enable NumPy is to use an installable binary package specific to your operating system. These binaries contain the full SciPy stack (NumPy, SciPy, matplotlib, IPython, SymPy and nose, along with core Python).

Windows
Anaconda (from https://www.continuum.io) is a free Python distribution for the SciPy stack. It is also available for Linux and Mac.
Canopy (https://www.enthought.com/products/canopy/) is available as a free as well as a commercial distribution with the full SciPy stack for Windows, Linux and Mac.
Python (x,y): a free Python distribution with the SciPy stack and the Spyder IDE for Windows (downloadable from http://python-xy.github.io/).

Linux
Package managers of the respective Linux distributions are used to install one or more packages in the SciPy stack.

For Ubuntu:

sudo apt-get install python-numpy python-scipy python-matplotlib ipython ipython-notebook python-pandas python-sympy python-nose

For Fedora:

sudo yum install numpy scipy python-matplotlib ipython python-pandas sympy python-nose atlas-devel
Building from Source
Core Python (2.6.x, 2.7.x and 3.2.x onwards) must be installed with distutils, and the zlib module should be enabled. A GNU gcc (4.2 and above) C compiler must be available.

To install NumPy from source, run the following command:

python setup.py install

To test whether NumPy module is properly installed, try to import it from Python prompt.

import numpy

If it is not installed, the following error message will be displayed.

Traceback (most recent call last):


File "<pyshell#0>", line 1, in <module>
import numpy
ImportError: No module named 'numpy'

Alternatively, NumPy package is imported using the following syntax:

import numpy as np

5.2 PYTHON SOFTWARE


Python is an interpreted, high-level, general-purpose programming language. Created by Guido van Rossum and first released in 1991, Python has a design philosophy that emphasizes code readability, notably using significant whitespace. It provides constructs that enable clear programming on both small and large scales. Van Rossum led the language community until stepping down as its leader in July 2018.

Python features a dynamic type system and automatic memory management. It supports multiple programming paradigms, including object-oriented, imperative, functional and procedural programming, and has a large and comprehensive standard library.

Python interpreters are available for many operating systems. CPython, the reference implementation of Python, is open-source software and has a community-based development model, as do nearly all of Python's other implementations. Python is a general-purpose interpreted, interactive, object-oriented, high-level programming language. It was created by Guido van Rossum during 1985-1990. Like Perl, Python source code is also available under the GNU General Public License (GPL).

Python is an easy-to-learn, powerful programming language. It has efficient high-level data structures and a simple but effective approach to object-oriented programming. Python's elegant syntax and dynamic typing, together with its interpreted nature, make it an ideal language for scripting and rapid application development in many areas on most platforms.

The Python interpreter is easily extended with new functions and data types implemented in C or C++ (or other languages callable from C). Python is also suitable as an extension language for customizable applications.
5.2.1 WRITING A PYTHON PROGRAM
Python programs must be written with a particular structure. The syntax must be
correct, or the interpreter will generate error messages and not execute the program. This
section introduces Python by providing a simple example program.
Listing 1.1 (simple.py) is one of the simplest Python programs that does something:

Fig. 5.2: WRITING A PYTHON PROGRAM

We will consider two ways in which we can run Listing 1.1 (simple.py):
1. Enter the program directly into an IDLE’s interactive shell and
2. Enter the program into an IDLE’s editor, save it, and run it.


IDLE’s interactive shell:


IDLE is a simple Python integrated development environment available for Windows, Linux, and Mac OS X. Figure 1.1 shows how to start IDLE from the Microsoft Windows Start menu, and the IDLE interactive shell is shown in Figure 1.2. You may type the one-line Python program above directly into IDLE and press Enter to execute it; Figure 1.3 shows the result in the IDLE interactive shell. Since the interactive shell does not provide a way to save the code you enter, it is not the best tool for writing larger programs; it is, however, useful for experimenting with small pieces of Python code.

IDLE editor:
IDLE has a built-in editor. From the IDLE menu, select New Window, as shown in Figure 1.4. Type the text of Listing 1.1 (simple.py) into the editor. Figure 1.5 shows the resulting editor window with the text of the simple Python program. You can save the program using the Save option in the File menu, as shown in Figure 1.6. Save the code to a file named simple.py. The actual name of the file is unimportant, but the name "simple" accurately describes the nature of this program, and .py is the extension used for Python source code. We can run the program from within the IDLE editor by pressing the F5 function key or from the editor's Run menu: Run → Run Module. The output appears in the IDLE interactive shell window.

Fig. 5.2.1: Launching IDLE from the Windows Start menu


Fig 5.2.2: The IDLE interpreter Window

5.3 INVOKING THE INTERPRETER

The Python interpreter is usually installed as /usr/local/bin/python3.7 on those machines where it is available; putting /usr/local/bin in your Unix shell's search path makes it possible to start it by typing the command python3.7 to the shell. Since the choice of the directory where the interpreter lives is an installation option, other places are possible; check with your local Python guru or system administrator. (E.g., /usr/local/python is a popular alternative location.)
On Windows machines, the Python installation is usually placed in C:\Program Files\Python37\, though you can change this when you're running the installer. To add this directory to your path, you can type the following command into the command prompt in a DOS box:

set path=%path%;C:\Program Files\Python37\

Typing an end-of-file character (Control-D on Unix, Control-Z on Windows) at the


primary prompt causes the interpreter to exit with a zero exit status. If that doesn’t work, you
can exit the interpreter by typing the following command: quit(). The interpreter’s line-editing
features include interactive editing, history substitution and code completion on systems that
support readline.


Perhaps the quickest check to see whether command line editing is supported is typing
Control-P to the first Python prompt you get. If it beeps, you have command line editing; see
Appendix Interactive Input Editing and History Substitution for an introduction to the keys. If
nothing appears to happen, or if ^P is echoed, command line editing isn’t available; you’ll only
be able to use backspace to remove characters from the current line. The interpreter operates
somewhat like the Unix shell: when called with standard input connected to a tty device, it
reads and executes commands interactively; when called with a file name argument or with a
file as standard input, it reads and executes a script from that file.

A second way of starting the interpreter is python -c command [arg] ..., which executes
the statement(s) in command, analogous to the shell’s -c option. Since Python statements often
contain spaces or other characters that are special to the shell, it is usually advised to quote
command in its entirety with single quotes. Some Python modules are also useful as scripts.
These can be invoked using python -m module [arg] ..., which executes the source file for
module as if you had spelled out its full name on the command line. When a script file is used,
it is sometimes useful to be able to run the script and enter interactive mode afterwards. This
can be done by passing -i before the script.
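As a brief illustration of these invocation forms (the module and script names are just examples; simple.py refers to the listing discussed earlier):

python -c 'print("hello from -c")'   # run a one-line statement, quoted in full for the shell
python -m http.server 8000           # run the standard-library module http.server as a script
python -i simple.py                  # run simple.py, then stay in interactive mode afterwards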

Argument Passing
When known to the interpreter, the script name and additional arguments thereafter are
turned into a list of strings and assigned to the argv variable in the sys module. You can access
this list by executing import sys. The length of the list is at least one; when no script and no
arguments are given, sys.argv[0] is an empty string. When the script name is given as '-'
(meaning standard input), sys.argv[0] is set to '-'. When -c command is used, sys.argv[0] is set
to '-c'. When -m module is used, sys.argv[0] is set to the full name of the located module.
Options found after -c command or -m module are not consumed by the Python interpreter’s
option processing but left in sys.argv for the command or module to handle.
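As a small sketch of this behaviour (the file name showargs.py is hypothetical, used only for illustration):

# showargs.py - print whatever command-line arguments the interpreter passes in
import sys

print("Number of arguments:", len(sys.argv))
print("Argument list:", sys.argv)

Running python showargs.py a b c would print a length of 4 and the list ['showargs.py', 'a', 'b', 'c'].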

Interactive Mode
When commands are read from a tty, the interpreter is said to be in interactive mode. In
this mode it prompts for the next command with the primary prompt, usually three greater-than
signs (>>>); for continuation lines it prompts with the secondary prompt, by default three dots

(...). The interpreter prints a welcome message stating its version number and a copyright notice before printing the first prompt.

The Interpreter and Its Environment

By default, Python source files are treated as encoded in UTF-8. In that encoding,
characters of most languages in the world can be used simultaneously in string literals,
identifiers and comments — although the standard library only uses ASCII characters for
identifiers, a convention that any portable code should follow. To display all these characters
properly, your editor must recognize that the file is UTF-8, and it must use a font that supports
all the characters in the file.

a. INSTALLING PYTHON
Go to www.python.org and download the latest version of Python (version 3.5 as of this
writing). It should be painless to install. If you have a Mac or Linux, you may already have
Python on your computer, though it may be an older version. If it is version 2.7 or earlier, then
you should install the latest version, as many of the programs in this book will not work
correctly on older versions.

b. IDLE
IDLE is a simple integrated development environment (IDE) that comes with Python. It’s a
program that allows you to type in your programs and run them. There are other IDEs for
Python, but for now I would suggest sticking with IDLE as it is simple to use. You can find
IDLE in the Python 3.4 folder on your computer.

When you first start IDLE, it starts up in the shell, which is an interactive window where
you can type in Python code and see the output in the same window. I often use the shell in
place of my calculator or to try out small pieces of code. But most of the time you will want to
open up a new window and type the program in there.

Note At least on Windows, if you click on a Python file on your desktop, your system will
run the program, but not show the code, which is probably not what you want. Instead, if you

right-click on the file, there should be an option called Edit with Idle. To edit an existing
Python file, either do that or start up IDLE and open the file through the File menu.

Fig. 5.3: Invoking the Interpreter


CHAPTER-6
RESULTS AND DISCUSSION

The results obtained are based on live, continuous input given to the system. The figures below show sample images of the output, in which the detected vehicle is classified as a bus, car, cycle, etc., along with the toll amount associated with it. The results are reliable since the system also differentiates between vehicles and other moving bodies.


Fig. 6.1: Detection of a car

Fig. 6.2: Detection of a bus


CHAPTER-7
CONCLUSION AND FUTURE WORK

CONCLUSION

This project designs a classification system that identifies an object as a specific type of vehicle. A Haar cascade classifier is used to detect the object. The experimental results were satisfactory, making the proposed system a rapid and robust solution for vehicle detection.

FUTURE SCOPE:
Machines are used in every part of human life; increasingly, we work according to machines rather than the other way around. Machines are therefore important, and so are their parts: if the parts do not fit well, a machine cannot work properly, so the dimensions of objects have a great impact. This AI- and IoT-based project helps in measuring such dimensions in real time. It is convenient and easy to use, and it gives accuracy and assurance of the manufactured product. As it is a one-time investment, it has great future scope.
CHALLENGES:
The main purpose is to recognize a specific object in real time from a large number of objects. Most recognition systems scale poorly as the number of recognizable objects grows, since the computational cost rises with the number of objects. Comparing and querying images using color, texture, and shape is not enough, because two different objects might share the same attributes. Designing a recognition system that works in a dynamic environment and behaves like a human is difficult. The main challenges in designing an object recognition system include lighting, dynamic backgrounds, the presence of shadows, camera motion, the speed of moving objects, intermittent object motion, weather conditions, etc.


ADVANTAGES:
1. It is economical.
2. It is fast.
3. It reduces human error and increases efficiency.
4. It is easy to use.
5. Fewer errors translate directly into more profit.
6. It is inexpensive; only a webcam is required.
APPLICATIONS:
➢ Mainly used for toll collection from vehicles, and also in other sectors such as:
1. Defence.
2. Laboratories.
3. Manufacturing industries.
4. The textile industry.
5. Aerospace systems.


REFERENCES

1. A. Hasnat, N. Shvai, A. Meicler, P. Maarek, A. Nakib, "New vehicle classification method based on hybrid classifiers," in IEEE Int. Conf. on Image Processing, IEEE, pp. 3084-3088, 2018.

2. N. Shvai, A. Meicler, A. Hasnat, E. Machover, P. Maarek, S. Loquet, A. Nakib, "Optimal ensemble classifiers based classification for automatic vehicle type recognition," 2018 IEEE Congress on Evolutionary Computation (CEC), pp. 1-8, 2018.

3. J. Fang, Y. Zhou, Y. Yu, S. Du, "Fine-grained vehicle model recognition using a coarse-to-fine convolutional neural network architecture," IEEE Trans. Intell. Transp. Syst., vol. 18, no. 7, pp. 1782-1792, 2017.

4. L. Jiang, J. Li, L. Zhuo, Z. Zhu, "Robust vehicle classification based on the combination of deep features and handcrafted features," in Trustcom/BigDataSE/ICESS 2017, IEEE, pp. 859-865, 2017.

5. M. Biglari, A. Soleimani, H. Hassanpour, "A cascaded part-based system for fine-grained vehicle classification," IEEE Trans. Intell. Transp. Syst., vol. 19, no. 1, pp. 273-283, 2018.

6. Varsha, Amit Kumar Mishra, Binita Pareek, "Automated Approach for Toll Detection Using Circular Hough Transform and Scalar Sharpness Index," 2019 4th International Conference on Internet of Things: Smart Innovation and Usages (IoT-SIU), 2019.

7. Debkumar Chowdhury, Souraneel Mandal, Dona Das, Soumya Banerjee, Sourath Shome, Devlina Choudhary, "An Adaptive Technique for Computer Vision Based Vehicles License Plate Detection System," 2019 International Conference on Opto-Electronics and Applied Optics (Optronix), 2019.


APPENDICES

A. SOURCE CODE

import cv2
import time
import serial

thres = 0.45  # confidence threshold to detect an object

# Serial link to the Arduino/NodeMCU on COM3; wait for the board to reset
arduino = serial.Serial("COM3", 9600)
time.sleep(2)

# Capture frames from the webcam at 640x480
cap = cv2.VideoCapture(0)
cap.set(3, 640)
cap.set(4, 480)

# Load the COCO class labels
classNames = []
classFile = 'coco.names'
with open(classFile, 'rt') as f:
    classNames = f.read().rstrip('\n').split('\n')

# Pre-trained SSD MobileNet v3 detection model files
configPath = 'ssd_mobilenet_v3_large_coco_2020_01_14.pbtxt'
weightsPath = 'frozen_inference_graph.pb'

net = cv2.dnn_DetectionModel(weightsPath, configPath)
net.setInputSize(320, 320)
net.setInputScale(1.0 / 127.5)
net.setInputMean(127.5)
net.setInputSwapRB(True)

while True:
    success, img = cap.read()
    classIds, confs, bbox = net.detect(img, confThreshold=thres)

    if len(classIds) != 0:
        for classId, confidence, box in zip(classIds.flatten(), confs.flatten(), bbox):
            cv2.rectangle(img, box, color=(0, 255, 0), thickness=2)

            if classNames[classId - 1].upper() == "CAR":
                print("vehicle Detected", classNames[classId - 1].upper())
                cv2.putText(img, classNames[classId - 1].upper(), (box[0] + 10, box[1] + 30),
                            cv2.FONT_HERSHEY_COMPLEX, 1, (0, 0, 255), 2)
                cv2.putText(img, str(round(confidence * 100, 2)), (box[0] + 200, box[1] + 30),
                            cv2.FONT_HERSHEY_COMPLEX, 1, (255, 0, 0), 2)
                print("1 data sent")
                arduino.write(b'1')  # car toll code
                time.sleep(1)

            if classNames[classId - 1].upper() == "BUS":
                print("vehicle Detected", classNames[classId - 1].upper())
                cv2.putText(img, classNames[classId - 1].upper(), (box[0] + 10, box[1] + 30),
                            cv2.FONT_HERSHEY_COMPLEX, 1, (0, 0, 255), 2)
                cv2.putText(img, str(round(confidence * 100, 2)), (box[0] + 200, box[1] + 30),
                            cv2.FONT_HERSHEY_COMPLEX, 1, (255, 0, 0), 2)
                print("2 data sent")
                arduino.write(b'2')  # bus toll code
                time.sleep(1)

            if classNames[classId - 1].upper() == "TRUCK":
                print("vehicle Detected", classNames[classId - 1].upper())
                cv2.putText(img, classNames[classId - 1].upper(), (box[0] + 10, box[1] + 30),
                            cv2.FONT_HERSHEY_COMPLEX, 1, (0, 0, 255), 2)
                cv2.putText(img, str(round(confidence * 100, 2)), (box[0] + 200, box[1] + 30),
                            cv2.FONT_HERSHEY_COMPLEX, 1, (255, 0, 0), 2)
                print("2 data sent")
                arduino.write(b'2')  # truck charged with the bus toll code
                time.sleep(1)
            else:
                print("0 data sent")
                arduino.write(b'0')  # detected object is not a truck; send '0'

    cv2.imshow("Original", img)
    cv2.waitKey(1)

# COCO class ids of interest: 3 = car, 6 = bus, 8 = truck
