1 INTRODUCTION 1
1.1 OUTLINE OF THE PROJECT 2
1.2 MOTIVATION 2
1.3 PROBLEM STATEMENT 2
3 METHODOLOGY 6
3.1 AIM 6
3.2 SCOPE 6
3.2.1 DRAWBACKS OF EXISTING SYSTEM 6
3.3 HARDWARE REQUIREMENTS 6
3.4 SOFTWARE REQUIREMENTS 7
3.5 SYSTEM DESIGN 8
3.6 MODULE IMPLEMENTATION 9
3.6.1 DATA PRE-PROCESSING 9
3.6.2 FEATURE EXTRACTION 9
3.6.3 SLIDING WINDOWS 9
3.7 HAAR CASCADE ALGORITHM 10
3.7.1 HAAR FEATURE SELECTION 10
3.7.2 CREATING INTEGRAL IMAGES 11
3.7.3 ADABOOST TRAINING 11
3.7.4 CASCADING CLASSIFIERS 11
3.8 APPLICATIONS 12
3.8.1 VIRTUAL PERSONAL ASSISTANCE 12
3.8.2 TRAFFIC PREDICTIONS 13
REFERENCES 65
APPENDICES 66
A. SOURCE CODE 66-67
ABSTRACT
Toll vehicle classification is an important task with many uses in traffic management and toll collection systems. In this paper, the Vinci Autoroutes network (the largest French highway concession) is considered, where millions of vehicles are classified in real time every year. Even a small decrease in classification performance can therefore cause serious economic losses, so accuracy and time complexity become critical for the toll collection system. The current classification algorithm uses scene features to detect vehicle classes; however, it requires a large labelled dataset and has limitations when multiple vehicles are in the scene. Herein, we propose a novel context-aware vehicle classification method that exploits the semantic spatial relationships of the objects. The experiments show that our method performs as accurately as the existing model with a significantly smaller labelled dataset (74 times smaller). Moreover, the proposed method obtains an accuracy of 99.97%, compared to 99.79% achieved by the current method when using the same training set. We apply the Haar Cascade algorithm for vehicle detection and classification.
LIST OF FIGURES
LIST OF ABBREVIATIONS
1 AI ARTIFICIAL INTELLIGENCE
2 GPS GLOBAL POSITIONING SYSTEM
3 UML UNIFIED MODELLING LANGUAGE
Automatic Toll Collection System Using RFID With Vehicle Classification Using Convolutional Neural Network
CHAPTER-1
INTRODUCTION
India is a nation with one of the most extensive national highway networks. The private agencies involved in building this infrastructure are allowed to charge citizens for its use. Congestion and inefficiency prompted the government to design and implement an Electronic Toll Collection (ETC) system that can remove these problems and provide convenience for everyone involved, directly or indirectly, in the toll collection process. Vehicle detection, tracking, classification and counting are important for military, civilian and government applications such as highway monitoring, traffic planning, toll collection and traffic flow analysis. For traffic management, vehicle detection is the basic step. Existing approaches include manual toll collection, RF tags, barcodes and number plate recognition. Each of these systems has disadvantages that lead to errors in the corresponding framework. The proposed system aims to design and develop a new, efficient toll collection system that will be a good low-cost alternative to all the other systems. Computer-vision-based techniques are more suitable because such systems do not disturb traffic during installation and are easy to modify. A camera captures images of vehicles passing through the toll booth, so a vehicle is detected through the camera. Depending on the area occupied by the vehicle, it is classified as light or heavy.
The government plans various stages to complete the projects under development. Computer vision is an important field of artificial intelligence in which decisions about a real-world scene with high-dimensional data are taken. Many highway toll collection systems have already been developed and are widely used in India, including manual toll collection, RF tags, barcodes and number plate recognition. To capture the number plate image, image processing is required; this can be done using OpenCV.
The numerical or symbolic information about a scene is decided using an appropriate model built with the help of object geometry, physics, statistics and learning theory. The scene under consideration is converted into image(s) or video(s), comprising many pictures, using camera(s) focused on the scene from various locations. Vision-related areas such as scene reconstruction, event detection, video tracking, object recognition, object pose estimation and image restoration are considered sub-areas of computer vision. Other fields, such as image processing, image analysis and machine vision, are also closely related to computer vision, and the techniques and applications of these areas overlap with one another. Image content is not interpreted in image processing, while in computer vision images are interpreted based on the properties of the content they contain. Computer vision may also include extracting 3D information from 2D images.
Transportation nowadays is a primary need, with every person looking for the most suitable daily transport. However, a huge problem exists: uncontrolled growth in personal vehicles has become one of the most serious transportation problems. According to research previously conducted by the Indonesian Ministry of Transportation, vehicle growth in Indonesia shows surprising figures: 12% for motorcycles, 8.89% for cars and 2.2% for buses. Vehicle detection is essential in intelligent systems that aim to detect potentially dangerous situations involving vehicles in advance and warn the driver.
1.2 MOTIVATION
Classification and detection of objects has been the state-of-the-art approach for many areas of computer vision, and in the domain of video surveillance the classification of objects has been a major breakthrough. The Haar classifier is able to detect vehicles and has been shown to greatly improve vehicle detection performance, with higher accuracy and robustness.
This paper presents a real-time vision framework that detects and tracks vehicles. The
framework consists of three main stages. Vehicles are first detected using Haar cascade
algorithm. In the second phase, an adaptive appearance-based model is built to dynamically
keep track of the detected vehicles and the third phase of data association to fuse the detection
and tracking results.
CHAPTER-2
LITERATURE SURVEY
The proposed method does not need a GPU and is much more convenient than GoogLeNet. The experimental results demonstrate that, for a specific task, combining the deep features obtained from a lightweight deep learning network with handcrafted features can achieve comparable or even higher performance than a deeper neural network.
Vehicle make and model recognition (VMMR) has become an important part of intelligent transportation systems. VMMR can be useful when license plate recognition is not feasible or fake number plates are used. VMMR is a hard, fine-grained classification problem, due to the large number of classes, substantial intra-class variation, and small inter-class distance. A novel cascaded part-based system has been proposed in this paper for VMMR. This system uses a latent support vector machine formulation to automatically find the discriminative parts of each vehicle category while learning a part-based model for each category. The approach employs a new training procedure, a novel greedy parts-localization method, and a practical multi-class data-mining algorithm. To speed up the system's processing time, a novel cascading scheme has been proposed, which applies the classifiers to the input image sequentially based on two proposed criteria: confidence and frequency. The cascaded system can run up to 80% faster with comparable accuracy relative to the non-cascaded system. Extensive experiments on our data set and the CompCars data set indicate the outstanding performance of our approach, which achieves an average accuracy of 97.01% on our challenging data set and an average accuracy of 95.55% on the CompCars data set.
CHAPTER-3
METHODOLOGY
3.1 AIM
Two toll collection systems currently exist. In the first, every vehicle has to stop at a toll plaza along the highway to pay the toll; one person collects the money and issues a receipt, after which the gate is opened either mechanically or electronically for the driver to pass through the toll plaza. The other is a smart card system, in which the driver presents a smart card to the system installed at the toll plaza in order to pass.
3.2 SCOPE
The system covers detection and classification of the vehicles and other objects of interest (e.g., the toll payment box), and prediction of the scene class based on the spatial relationships among the vehicles of interest and contextually important objects. It uses machine learning techniques to reach a high degree of accuracy from what is called "training data". Haar Cascades use the AdaBoost learning algorithm, which selects a small number of important features from a large set to give an efficient set of classifiers.
3.2.1 DRAWBACKS OF EXISTING SYSTEM
The existing system for collecting toll tax is time consuming; there are long queues of vehicles at the toll plaza, and there is a chance of vehicles escaping payment of the toll tax.
3.3 HARDWARE REQUIREMENTS
The most common set of requirements defined by any operating system or software application is the physical computer resources, also known as hardware. A hardware requirements list is often accompanied by a hardware compatibility list, especially in the case of operating systems. The minimal hardware requirements are as follows:
1. Processor: Pentium IV
2. RAM: 8 GB
7. Keyboard: 104 keys
3.4 SOFTWARE REQUIREMENTS
Software requirements deal with defining the resource requirements and prerequisites that need to be installed on a computer to allow an application to function. These requirements need to be installed separately before the software is installed. The minimal software requirements are as follows:
3.5 SYSTEM DESIGN
[Architecture block diagram: Feature Extraction → Model Training → Sliding Windows → Results]
3.6 MODULE IMPLEMENTATION
3.6.1 DATA PRE-PROCESSING
Pre-processing is a technique used to convert raw data into a clean data set. In other words, whenever data is gathered from different sources it is collected in a raw format that is not feasible for analysis.
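A minimal sketch of such pre-processing for image data (the 64x64 output size and the luminance weights are illustrative assumptions; in an OpenCV pipeline, cv2.cvtColor and cv2.resize would normally do this work):

```python
import numpy as np

def preprocess(frame, out_size=(64, 64)):
    # Grayscale via standard luminance weights (frame assumed BGR, as OpenCV uses)
    gray = frame[..., :3] @ np.array([0.114, 0.587, 0.299])
    # Nearest-neighbour resize (a sketch; cv2.resize is the usual choice)
    h, w = gray.shape
    rows = np.arange(out_size[1]) * h // out_size[1]
    cols = np.arange(out_size[0]) * w // out_size[0]
    small = gray[rows][:, cols]
    # Scale pixel values into the [0, 1] range expected by most models
    return small / 255.0
```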
3.6.2 FEATURE EXTRACTION
Feature extraction is the process of transforming the raw pixel values of an image into more meaningful and useful information that can be used by other techniques, such as point matching or machine learning.
3.6.3 SLIDING WINDOWS
The technique is best understood through the window pane of a bus: consider a window of length n and a pane, fixed in it, of length k. Initially the pane is at the extreme left, i.e., 0 units from the left. Now correlate the window with an array arr[] of size n and the pane with the current sum of k elements. If we apply force on the window so that it moves one unit ahead, the pane will cover the next k consecutive elements.
Consider an array arr[] = {5, 2, -1, 0, 3} with k = 3 and n = 5.
Applying sliding window technique:
3.6.3.1 We compute the sum of the first k of the n elements using a linear loop and store the sum in the variable window_sum.
3.6.3.2 We then traverse linearly over the array until the end, simultaneously keeping track of the maximum sum.
3.6.3.3 To get the current sum of a block of k elements, subtract the first element of the previous block and add the last element of the current block. This shows how the window slides over the array. In the initial phase, the window sum is calculated starting from index 0; at this stage the window sum is 6, so the maximum sum is set to this current window sum, 6.
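The three steps above can be sketched directly in Python, using the example array {5, 2, -1, 0, 3} with k = 3:

```python
def max_window_sum(arr, k):
    """Maximum sum over all contiguous windows of length k (sliding window)."""
    window_sum = sum(arr[:k])        # step 1: sum of the first k elements
    max_sum = window_sum
    for i in range(k, len(arr)):     # steps 2-3: slide the window one element
        window_sum += arr[i] - arr[i - k]
        max_sum = max(max_sum, window_sum)
    return max_sum
```

For arr = [5, 2, -1, 0, 3] the successive window sums are 6, 1 and 2, so the maximum is 6.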
3.7 HAAR CASCADE ALGORITHM
Haar Cascade is a machine learning object detection algorithm used to identify objects in an image or video. It is based on the concept of features proposed by Paul Viola and Michael Jones in their 2001 paper "Rapid Object Detection using a Boosted Cascade of Simple Features". It is a machine learning based approach in which a cascade function is trained from a large number of positive and negative images and is then used to detect objects in other images.
The algorithm has four stages:
1. Haar Feature Selection
2. Creating Integral Images
3. AdaBoost Training
4. Cascading Classifiers
Let's take face detection as an example. Initially, the algorithm needs many positive images of faces and negative images without faces to train the classifier. Features then need to be extracted from them.
3.7.1 Haar Feature Selection
The first step is to collect the Haar features. A Haar feature considers adjacent rectangular regions at a specific location in a detection window, sums up the pixel intensities in each region, and calculates the difference between these sums.
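As a NumPy sketch of such a feature, here is a two-rectangle Haar feature computed naively by summing pixels (the window coordinates are illustrative; production detectors read these sums from an integral image instead):

```python
import numpy as np

def two_rect_feature(img, r, c, h, w):
    """Difference between the pixel sums of the left and right halves of an
    h x w window at (r, c) - one of the simplest Haar-like features."""
    left = img[r:r + h, c:c + w // 2].sum()
    right = img[r:r + h, c + w // 2:c + w].sum()
    return int(left - right)
```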
3.7.2 Creating Integral Images
Integral images are used to make this computation very fast. However, most of the features calculated are irrelevant. For example, in a face image, a good feature may focus on the property that the region of the eyes is often darker than the region of the nose and cheeks.
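An integral image can be built with two cumulative sums, after which the sum of any rectangle needs only four array lookups (a sketch; the zero-padded first row and column simplify the corner arithmetic):

```python
import numpy as np

def integral_image(img):
    """ii[r, c] holds the sum of all pixels in img[:r, :c]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, r, c, h, w):
    """Sum of the h x w rectangle at (r, c), from just four lookups."""
    return int(ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c])
```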
3.7.3 AdaBoost Training
So how do we select the best features out of 160000+ features? This is accomplished
using a concept called Adaboost which both selects the best features and trains the
classifiers that use them. This algorithm constructs a “strong” classifier as a linear
combination of weighted simple “weak” classifiers. The process is as follows.
During the detection phase, a window of the target size is moved over the input image, and the Haar features are calculated for each subsection of the image. Because each Haar feature is only a "weak classifier" (its detection quality is slightly better than random guessing), a large number of Haar features are necessary to describe an object with sufficient accuracy; they are therefore organized into cascade classifiers to form a strong classifier. AdaBoost produces a highly accurate classifier by taking a weighted average of the decisions made by the weak learners.
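This weighted vote can be sketched as follows (the weak classifiers and alpha weights below are illustrative placeholders, not trained values):

```python
def strong_classify(x, weak_clfs, alphas):
    """AdaBoost strong classifier: sign of the weighted sum of weak votes
    (+1 = object found, -1 = no object)."""
    score = sum(alpha * clf(x) for clf, alpha in zip(weak_clfs, alphas))
    return 1 if score >= 0 else -1

# Illustrative weak classifiers: simple thresholds on a 1-D feature value.
weak = [lambda x: 1 if x > 0 else -1,
        lambda x: 1 if x > 5 else -1]
alphas = [0.7, 0.3]
```

In training, each alpha is derived from the weak learner's error rate, so more reliable learners get a larger say in the vote.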
3.7.4 Cascading Classifiers
Each stage of the classifier labels the region defined by the current location of the
sliding window as either positive or negative. Positive indicates that an object was found
and negative indicates no objects were found. If the label is negative, the classification of
this region is complete, and the detector slides the window to the next location.
3.8 APPLICATIONS
3.8.1 Virtual Personal Assistants
Siri, Alexa and Google Now are some popular examples of virtual personal assistants. As the name suggests, they assist in finding information when asked over voice. All you need to do is activate them and ask "What is my schedule for today?", "What are the flights from Germany to London?", or similar questions. To answer, your personal assistant looks up the information, recalls your related queries, or sends a command to other resources (like phone apps) to collect it. You can even instruct assistants to perform tasks like "Set an alarm for 6 AM tomorrow morning" or "Remind me to visit the Visa Office the day after tomorrow". Machine learning is an important part of these personal assistants, as they collect and refine information on the basis of your previous interactions with them. Virtual assistants are integrated into a variety of platforms, for example smart speakers (Amazon Echo and Google Home), smartphones (Samsung Bixby on the Samsung S8) and mobile apps (Google Allo).
3.8.2 Traffic Predictions
We have all used GPS navigation services. While we do, our current locations and velocities are saved at a central server for managing traffic, and this data is then used to build a map of current traffic. While this helps in reducing traffic and enables congestion analysis, the underlying problem is that only a small number of cars are equipped with GPS. When offering ride-sharing services, how do providers minimize detours? The answer is machine learning. Jeff Schneider, the engineering lead at Uber ATC, revealed in an interview that they use ML to define price-surge hours by predicting rider demand. Throughout the entire service cycle, ML plays a major role.
Imagine a single person monitoring multiple video cameras: certainly a difficult and boring job. This is why the idea of training computers to do this job makes sense. Video surveillance systems nowadays are powered by AI, which makes it possible to detect crimes before they happen by tracking unusual behavior, such as people standing motionless for a long time, stumbling, or napping on benches. The system can then alert human attendants, which can ultimately help avoid mishaps.
1. Detection and classification of the vehicles and other objects of interest (e.g., toll
payment box).
2. Predicting the scene class based on the spatial relationships among the vehicles of
interest and contextually important objects.
CHAPTER-4
DESIGN METHODOLOGY
SSD is a popular object detection algorithm that was developed at Google. It is based on the VGG-16 architecture, which makes SSD simple and easy to implement.
A set of default boxes is made to pass over several feature maps in a convolutional
manner. If an object detected is one among the object classifiers during prediction, then a
score is generated. The object shape is adjusted to match the localization box. For each
box, shape offsets and confidence level are predicted. During training, default boxes are
matched to the ground truth boxes. The fully connected layers are discarded by SSD
architecture.
The number of parameters is reduced significantly by this model through the use of depthwise separable convolutions, compared with a network of the same depth using normal convolutions. This reduction of parameters results in a lightweight neural network.
This technique estimates the optical flow field using an optical flow algorithm, and a local mean algorithm is then used to enhance it. A self-adaptive algorithm is applied to filter noise. The technique adapts well to the number and size of the objects and helps avoid time-consuming and complicated preprocessing methods.
Background Subtraction
The background subtraction (BS) method is a rapid way of localizing moving objects in a video captured by a stationary camera. It forms the primary step of a multi-stage vision system. This type of process separates the background from the foreground objects in a sequence of images.
Two ways in which an object can be tracked in the above example are: (1) tracking in a sequence of detections. In this method, a CCTV video sequence of moving traffic is used. Suppose someone wants to track a car's or a person's movement; different images or frames are taken at different intervals of time. With the help of these images the target object, such as a car or person, is located, and by checking how the object has moved across different frames of the video it can be tracked. The object's velocity can be calculated from its displacement between frames taken at different intervals of time. This method actually has a flaw: one is not tracking but merely detecting the object at different intervals of time. An improved method is (2) "detection with dynamics". In this method the car's trajectory or movement is estimated by checking its position at a particular time 't' and estimating its position at another time, say 't+10'. From this estimate, the actual image of the car at time 't+10' can be proposed.
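In its simplest constant-velocity form, the "detection with dynamics" idea (estimate position at time t, predict it at t+10) reduces to:

```python
def predict_position(pos_t, pos_prev, steps_ahead=10):
    """Constant-velocity prediction: velocity is estimated from the last two
    observed positions (one per frame), then extrapolated steps_ahead frames.
    A Kalman filter would refine this by also modelling observation noise."""
    vx = pos_t[0] - pos_prev[0]
    vy = pos_t[1] - pos_prev[1]
    return (pos_t[0] + vx * steps_ahead, pos_t[1] + vy * steps_ahead)
```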
BLOCK DIAGRAM
[Block diagram: web camera system / laptop running OpenCV (PyCharm, NumPy) processing the input data]
Since AlexNet stormed the research world by winning the 2012 ImageNet Large Scale Visual Recognition Challenge, deep learning for detection has far exceeded the traditional computer-vision methods used in the literature. In computer vision, convolutional neural networks are distinguished in image classification. Fig. 1 shows the basic block diagram of detection and tracking. In this paper, SSD- and MobileNet-based algorithms are implemented for detection and tracking in a Python environment. Object detection involves detecting the region of interest of an object in an image of a given class. Different methods are frame differencing, optical flow and background subtraction. This is a method of detecting and locating a moving object with the help of a camera. Detection and tracking algorithms are described by extracting the features of images and video for security applications [3] [7] [8].
The Arduino reference design can use an ATmega8, 168, or 328. Current models use an ATmega328, but an ATmega8 is shown in the schematic for reference. The pin configuration is identical on all three processors.
4.1.3 Specifications
Microcontroller: ESP8266
Operating Voltage: 5V
EEPROM: 1 KB (ESP8266)
4.1.4 Power
The Arduino Uno can be powered via the USB connection or with an external power
supply. The power source is selected automatically.
External (non-USB) power can come either from an AC-to-DC adapter (wall wart) or a battery. The adapter can be connected by plugging a 2.1 mm center-positive plug into the
board's power jack. Leads from a battery can be inserted in the Gnd and Vin pin headers of the
POWER connector. The board can operate on an external supply of 6 to 20 volts. If supplied
with less than 7V, however, the 5V pin may supply less than five volts and the board may be
unstable. If using more than 12V, the voltage regulator may overheat and damage the board.
The recommended range is 7 to 12 volts. The power pins are as follows:
VIN.
5V. This pin outputs a regulated 5V from the regulator on the board. The board can be supplied with power either from the DC power jack (7-12V), the USB connector (5V), or the VIN pin of the board (7-12V). Supplying voltage via the 5V or 3.3V pins bypasses the regulator and can damage your board; we don't advise it.
3V3. A 3.3 volt supply generated by the on-board regulator. Maximum current draw is 50 mA.
GND. Ground pins.
IOREF. This pin on the Arduino board provides the voltage reference with which the
microcontroller operates. A properly configured shield can read the IOREF pin voltage and
select the appropriate power source or enable voltage translators on the outputs for working
with the 5V or 3.3V.
4.1.5 Memory
The ESP8266 has 64 KB of memory (with 0.5 KB used for the boot loader). It also has 2 KB of SRAM and 1 KB of EEPROM (which can be read and written with the EEPROM library).
4.1.6 Input and Output
Each of the 14 digital pins on the Uno can be used as an input or output, using the pinMode(), digitalWrite(), and digitalRead() functions. They operate at 5 volts. Each pin can provide or receive a maximum of 40 mA and has an internal pull-up resistor (disconnected by default) of 20-50 kOhms. In addition, some pins have specialized functions:
Serial: 0 (RX) and 1 (TX). Used to receive (RX) and transmit (TX) TTL serial data. These pins are connected to the corresponding pins of the ATmega8U2 USB-to-TTL serial chip.
External Interrupts: 2 and 3. These pins can be configured to trigger an interrupt on a low value, a rising or falling edge, or a change in value. See the attachInterrupt() function for details.
PWM: 3, 5, 6, 9, 10, and 11. Provide 8-bit PWM output with the analogWrite() function.
SPI: 10 (SS), 11 (MOSI), 12 (MISO), 13 (SCK). These pins support SPI communication using the SPI library.
LED: 13. There is a built-in LED connected to digital pin 13. When the pin is at a HIGH value the LED is on; when the pin is LOW, it is off.
The Uno has 6 analog inputs, labeled A0 through A5, each of which provides 10 bits of resolution (i.e. 1024 different values). By default they measure from ground to 5 volts, though it is possible to change the upper end of their range using the AREF pin and the analogReference() function. Additionally, some pins have specialized functionality:
TWI: A4 or SDA pin and A5 or SCL pin. Support TWI communication using the Wire library.
There are a couple of other pins on the board:
AREF. Reference voltage for the analog inputs. Used with analogReference().
Reset. Bring this line LOW to reset the microcontroller. Typically used to add a reset button to shields which block the one on the board.
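For the 6 analog inputs described above, a 10-bit reading maps linearly to a voltage as raw * Vref / 1023; for example:

```python
def adc_to_volts(raw, vref=5.0):
    """Convert a 10-bit ADC reading (0-1023) to volts for the given reference."""
    return raw * vref / 1023
```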
4.1.7 Communication
The Arduino Nano has a number of facilities for communicating with a computer, another Arduino, or other microcontrollers. The ATmega328 provides UART TTL (5V) serial communication, which is available on digital pins 0 (RX) and 1 (TX). An ATmega16U2 on the board channels this serial communication over USB and appears as a virtual COM port to software on the computer. The 16U2 firmware uses the standard USB COM drivers, and no external driver is needed; however, on Windows, a .inf file is required. The Arduino software includes a serial monitor which allows simple textual data to be sent to and from the Arduino board. The RX and TX LEDs on the board will flash when data is being transmitted via the USB-to-serial chip and USB connection to the computer (but not for serial communication on pins 0 and 1).
A SoftwareSerial library allows serial communication on any of the Uno's digital pins. The ATmega328 also supports I2C (TWI) and SPI communication. The Arduino software includes a Wire library to simplify use of the I2C bus; see the documentation for details. For SPI communication, use the SPI library.
4.1.8 Programming
The Arduino Uno can be programmed with the Arduino software. The ATmega328 on the Arduino Nano comes preburned with a boot loader that allows you to upload new code to it without the use of an external hardware programmer. It communicates using the original STK500 protocol (reference, C header files). You can also bypass the boot loader and program the microcontroller through the ICSP (In-Circuit Serial Programming) header; see these instructions for details.
The ATmega16U2 (or 8U2 on the rev1 and rev2 boards) firmware source code is available. The ATmega16U2/8U2 is loaded with a DFU boot loader, which can be activated as follows. On Rev1 boards: connect the solder jumper on the back of the board (near the map of Italy) and then reset the 8U2.
On Rev2 or later boards: a resistor pulls the 8U2/16U2 HWB line to ground, making it easier to put into DFU mode. You can then use Atmel's FLIP software (Windows) or the DFU programmer (Mac OS X and Linux) to load new firmware, or you can use the ISP header with an external programmer (overwriting the DFU boot loader). See this user-contributed tutorial for more information.
Rather than requiring a physical press of the reset button before an upload, the Arduino Nano is designed in a way that allows it to be reset by software running on a connected computer. One of the hardware flow control lines (DTR) of the ATmega8U2/16U2 is connected to the reset line of the ATmega328 via a 100 nanofarad capacitor. When this line is asserted (taken low), the reset line drops long enough to reset the chip. The Arduino software uses this capability to allow you to upload code by simply pressing the upload button in the Arduino environment. This means that the boot loader can have a shorter timeout, as the lowering of DTR can be well coordinated with the start of the upload.
This setup has other implications. When the Nano is connected to a computer running Mac OS X or Linux, it resets each time a connection is made to it from software (via USB). For the following half-second or so, the boot loader is running on the Nano. While it is programmed to ignore malformed data (i.e. anything besides an upload of new code), it will intercept the first few bytes of data sent to the board after a connection is opened. If a sketch running on the board receives one-time configuration or other data when it first starts, make sure that the software with which it communicates waits a second after opening the connection before sending this data. The Nano contains a trace that can be cut to disable the auto-reset; the pads on either side of the trace can be soldered together to re-enable it. It is labeled "RESET-EN". You may also be able to disable the auto-reset by connecting a 110 ohm resistor from 5V to the reset line.
The Arduino Nano has a resettable polyfuse that protects your computer's USB ports
from shorts and overcurrent. Although most computers provide their own internal protection, the
fuse provides an extra layer of protection. If more than 500 mA is applied to the USB port, the
fuse will automatically break the connection until the short or overload is removed.
The EM-18 RFID reader is one of the most commonly used RFID readers for 125 kHz tags. It features low cost, low power consumption, a small form factor and ease of use. It provides both UART and Wiegand26 output formats. It can be interfaced directly with microcontrollers using UART, and with a PC using an RS232 converter.
Working of the EM-18 RFID module: the module radiates 125 kHz through its coils, and when a 125 kHz passive RFID tag is brought into this field it is energized by it. These passive RFID tags mostly consist of the CMOS IC EM4102, which can draw enough power for its operation from the field generated by the reader. By changing the modulation current through the coils, the tag sends back the information contained in its factory-programmed memory.
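On the UART side, the module streams each tag as 12 ASCII characters at 9600 baud: 10 hex ID characters followed by a 2-character checksum. The checksum convention assumed here (XOR of the five ID bytes) is the commonly documented one for the EM-18; verify it against your module's datasheet. Reading the port itself would use, e.g., pyserial's serial.Serial('/dev/ttyUSB0', 9600); the frame validation alone is:

```python
def parse_em18(frame):
    """Validate a 12-character EM-18 frame: 10 hex ID chars + 2-char XOR checksum.
    Returns the 10-character tag ID, or None if the frame is malformed."""
    if len(frame) != 12:
        return None
    tag, chk = frame[:10], frame[10:]
    try:
        xor = 0
        for i in range(0, 10, 2):      # XOR the five ID bytes together
            xor ^= int(tag[i:i + 2], 16)
        return tag if xor == int(chk, 16) else None
    except ValueError:                 # non-hex characters in the frame
        return None
```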
Block Diagram
Pin 1: VCC (5V)
Pin 2: GND (Ground)
Pin 4: ANT (No use)
Pin 5: ANT (No use)
Pin 8: D1 (WIEGAND Data 1)
Pin 9: D0 (WIEGAND Data 0)
Fig. 4.2.6: Interfacing EM-18 RFID Reader Module with Node MCU –Circuit Diagram
Breadboard Wiring.
Fig. 4.2.7: Interfacing EM-18 RFID Reader with Node MCU – On Board
A servo motor is an electrical device which can push or rotate an object with great precision. If you want to rotate an object to some specific angle or distance, you use a servo motor. It is made up of a simple motor which runs through a servo mechanism. If the motor used is DC powered it is called a DC servo motor, and if it is an AC powered motor it is called an AC servo motor. Due to these features they are used in many applications such as toy cars, RC helicopters and planes, robotics and machinery.
Servo motors are rated in kg/cm (kilogram per centimeter); most hobby servo motors are rated at 3 kg/cm, 6 kg/cm or 12 kg/cm. This rating tells you how much weight your servo motor can lift at a particular distance. For example, a 6 kg/cm servo motor should be able to lift 6 kg if the load is suspended 1 cm from the motor's shaft; the greater the distance, the lower the weight-carrying capacity.
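The rating arithmetic above is simply torque divided by lever-arm distance:

```python
def max_load_kg(rating_kg_cm, distance_cm):
    """Weight a servo rated at rating_kg_cm can hold at distance_cm from the shaft."""
    return rating_kg_cm / distance_cm
```

So a 6 kg/cm servo holds 6 kg at 1 cm but only 3 kg at 2 cm.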
The position of a servo motor is decided by an electrical pulse, and its control circuitry is placed beside the motor.
Servo Mechanism
A servo mechanism consists of three parts:
1. Controlled device
2. Output sensor
3. Feedback system
Fig. 4.3: Servo Motor
It is a closed-loop system which uses a negative feedback system to control the motion and final position of the shaft. The device is controlled by a feedback signal generated by comparing the output signal and the reference input signal.
The reference input signal is compared with the reference output signal, and a third signal is produced by the feedback system. This third signal acts as the input signal to the controlled device, and it is present as long as the feedback signal is generated, i.e. as long as there is a difference between the reference input signal and the reference output signal.
All servo motors have three wires coming out of them. Two are used for the supply (positive and negative) and one for the signal that is sent from the MCU.
A servo motor is controlled by PWM (Pulse Width Modulation), which is provided through the control wire. There is a minimum pulse, a maximum pulse and a repetition rate. A servo motor can turn 90 degrees in either direction from its neutral position. For example, a 1.5 ms pulse makes the motor turn to the 90° position; a pulse shorter than 1.5 ms moves the shaft towards 0°, and one longer than 1.5 ms turns the servo towards 180°.
A servo motor works on the PWM (Pulse Width Modulation) principle, meaning its angle of
rotation is controlled by the duration of the pulse applied to its control pin. Basically, a servo motor is
made up of a DC motor that is controlled by a variable resistor (potentiometer) and some
gears. The gears convert the high speed of the DC motor into torque: since WORK =
FORCE X DISTANCE, the DC motor provides low force at high speed (distance), while the servo output provides
high force at low speed. The potentiometer is connected to the output shaft of the servo to
measure the angle and stop the DC motor at the required angle. Servo motors are most commonly
used in high-technology devices in industrial applications such as automation technology.
Thus this section discusses the definition, types, mechanism, principle, working, controlling,
and lastly the applications of a servo motor. A servo motor is a rotary actuator, or motor, that
allows precise control of angular position, velocity, and acceleration. Basically,
it has capabilities that a regular motor does not have: it pairs a
regular motor with a sensor for position feedback.
Principle of working:
A servo motor works on the PWM (Pulse Width Modulation) principle, which means its
angle of rotation is controlled by the duration of the pulse applied to its control pin. Basically, a servo
motor is made up of a DC motor controlled by a variable resistor (potentiometer) and
some gears.
Mechanism of servomotor:
Working of servomotors:
Servo motors control position and speed very precisely. A potentiometer can sense
the mechanical position of the shaft, so it is coupled to the motor shaft through gears. The
current position of the shaft is converted into an electrical signal by the potentiometer and compared
with the command input signal. In modern servo motors, electronic encoders or sensors sense the
position of the shaft.
We give a command input corresponding to the desired position of the shaft. If the feedback signal differs
from the given input, an error signal is generated. This error signal is amplified and applied as the
input to the motor, so the motor rotates. When the shaft reaches the required position,
the error signal becomes zero, and the motor stands still, holding the position.
The command input is in the form of electrical pulses. Since the actual input to the motor is the
difference between the feedback signal (current position) and the required signal, the speed of the
motor is proportional to the difference between the current position and the required position. The
amount of power required by the motor is proportional to the distance it needs to travel.
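The feedback behaviour described above can be sketched as a toy proportional control loop in Python. This is an illustration only, not the servo's real electronics; the function name and gain value are our assumptions.

```python
# Illustrative proportional feedback step: the drive applied to the motor is
# proportional to the error between the commanded and the sensed position.
def servo_step(position: float, target: float, gain: float = 0.5) -> float:
    error = target - position        # error signal = command input - feedback
    return position + gain * error   # amplified error drives the shaft

position, target = 0.0, 90.0
for _ in range(50):                  # iterate until the shaft settles
    position = servo_step(position, target)

# The error (and hence the drive) shrinks toward zero as the shaft
# approaches the target, so the motor effectively holds at 90 degrees.
print(round(position, 3))
```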
Controlling of servomotors:
Usually a servomotor turns 90 degrees in either direction, so the maximum movement can
be 180 degrees. A normal servo motor cannot rotate any further because of a built-in mechanical
stop.
Three wires come out of a servo: positive, ground, and the control wire. A servo motor
is controlled by sending a pulse-width-modulated (PWM) signal through the control wire. A pulse is
sent every 20 milliseconds, and the width of the pulse determines the position of the shaft.
For example, a pulse of 1 ms will move the shaft anticlockwise to -90 degrees, a pulse of 1.5 ms will
move the shaft to the neutral position (0 degrees), and a pulse of 2 ms will move the shaft
clockwise to +90 degrees.
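Assuming the 1 ms / 1.5 ms / 2 ms convention just described, the angle-to-pulse mapping can be sketched in Python. The helper below is hypothetical, written only to make the linear relationship explicit.

```python
# Hypothetical mapping from target angle to pulse width, using the common
# convention above: 1 ms -> -90 deg, 1.5 ms -> 0 deg (neutral), 2 ms -> +90 deg,
# with the pulse repeated every 20 ms (50 Hz).
def angle_to_pulse_ms(angle_deg: float) -> float:
    if not -90 <= angle_deg <= 90:
        raise ValueError("angle out of the servo's mechanical range")
    return 1.5 + (angle_deg / 90.0) * 0.5  # linear between the endpoints

print(angle_to_pulse_ms(-90))  # 1.0
print(angle_to_pulse_ms(0))    # 1.5
print(angle_to_pulse_ms(90))   # 2.0
```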
Applications:
1. Robotics: a servomotor is connected at every joint of the robot, giving the robot arm its
precise angle.
2. Conveyor belts: servo motors move, stop, and start conveyor belts carrying products through
various stages, for example in product packaging/bottling and labelling.
3. Camera autofocus: a highly precise servo motor built into the camera adjusts the lens
to sharpen out-of-focus images.
4. Solar tracking systems: servo motors adjust the angle of solar panels throughout the day so that
each panel continues to face the sun, harnessing maximum energy from
sunup to sundown.
Appearing as practical electronic components in 1962, the earliest LEDs emitted low-
intensity infrared light. Infrared LEDs are still frequently used as transmitting elements in remote-
control circuits, such as those in remote controls for a wide variety of consumer electronics. The
first visible-light LEDs were of low intensity and limited to red. Modern LEDs are available across
the visible, ultraviolet, and infrared wavelengths, with very high brightness.
Early LEDs were often used as indicator lamps for electronic devices, replacing small
incandescent bulbs. They were soon packaged into numeric readouts in the form of seven-segment
displays and were commonly seen in digital clocks. Recent developments have produced LEDs
suitable for environmental and task lighting. LEDs have led to new displays and sensors, while
their high switching rates are useful in advanced communications technology.
LEDs have many advantages over incandescent light sources, including lower energy
consumption, longer lifetime, improved physical robustness, smaller size, and faster switching.
Light-emitting diodes are used in applications as diverse as aviation lighting, automotive
headlamps, advertising, general lighting, traffic signals, camera flashes, lighted wallpaper and
medical devices.[10] They are also significantly more energy efficient and, arguably, have fewer
environmental concerns linked to their disposal.[11][12]
Unlike a laser, the light emitted from an LED is neither coherent nor
monochromatic, but the spectrum is narrow with respect to human vision, and for most purposes
the light from a simple diode element can be regarded as functionally monochromatic. LED
development began with infrared and red devices made with gallium arsenide. Advances in
materials science have enabled making devices with ever-shorter wavelengths, emitting light in a
variety of colors.
LEDs are usually built on an n-type substrate, with an electrode attached to the p-type layer
deposited on its surface. P-type substrates, while less common, occur as well. Many commercial
LEDs, especially GaN/InGaN, also use sapphire substrate.
Typical indicator LEDs are designed to operate with no more than 30–60 milliwatts (mW)
of electrical power. Around 1999, Philips Lumileds introduced power LEDs capable of continuous
use at one watt. These LEDs used much larger semiconductor die sizes to handle the large power
inputs. Also, the semiconductor dies were mounted onto metal slugs to allow for greater heat
dissipation from the LED die.
One of the key advantages of LED-based lighting sources is high luminous efficacy. White
LEDs quickly matched and overtook the efficacy of standard incandescent lighting systems. In
2002, Lumileds made five-watt LEDs available with luminous efficacy of 18–22 lumens per watt
(lm/W). For comparison, a conventional incandescent light bulb of 60–100 watts emits around
15 lm/W, and standard fluorescent lights emit up to 100 lm/W.
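The efficacy figures quoted above translate directly into total light output, since luminous flux (lm) = efficacy (lm/W) x electrical power (W). A back-of-the-envelope sketch, with a function name of our own choosing:

```python
# Luminous flux = efficacy (lm/W) multiplied by electrical power (W).
def luminous_flux_lm(efficacy_lm_per_w: float, power_w: float) -> float:
    return efficacy_lm_per_w * power_w

incandescent = luminous_flux_lm(15, 60)  # 60 W bulb at ~15 lm/W
led_2002     = luminous_flux_lm(20, 5)   # 5 W Lumileds LED at ~20 lm/W

# The 2002-era LED emits far less total light than the bulb, but per watt
# it is already more efficient (20 lm/W vs 15 lm/W).
print(incandescent, led_2002)  # 900 100
```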
In September 2003, a new type of blue LED was demonstrated by Cree. This produced a
commercially packaged white light giving 65 lm/W at 20 mA, becoming the brightest white LED
commercially available at the time, and more than four times as efficient as standard
incandescents. In 2006, they demonstrated a prototype with a record white LED luminous efficacy
of 131 lm/W at 20 mA. Nichia Corporation has developed a white LED with luminous efficacy of
150 lm/W at a forward current of 20 mA.[80] Cree's XLamp XM-L LEDs, commercially available
in 2011, produce 100 lm/W at their full power of 10 W, and up to 160 lm/W at around 2 W input
power. In 2012, Cree announced a white LED giving 254 lm/W,[81] and 303 lm/W in March
2014.[82] Practical general lighting needs high-power LEDs, of one watt or more. Typical operating
currents for such devices begin at 350 mA.
United States Department of Energy (DOE) testing of commercial LED lamps designed to
replace incandescent lamps or CFLs showed that average efficacy was still about 46 lm/W in 2009
(tested performance ranged from 17 lm/W to 79 lm/W).
4.5 PIEZO-BUZZER
The word "buzzer" comes from the rasping noise that buzzers made when they were
electromechanical devices, operated from stepped-down AC line voltage at 50 or 60 cycles. Other
sounds commonly used to indicate that a button has been pressed are a ring or a beep. Some
systems, such as the one used on Jeopardy!, make no noise at all, instead using light.
Specifications:
Rated Voltage:
A piezo buzzer is driven by square waves (V p-p).
Operating Voltage:
The voltage range for normal operation. The buzzer is not guaranteed to produce the
minimum SPL unless driven at the rated voltage.
Consumption Current:
The current consumed is stable during regular operation; however, the buzzer typically
draws about three times this current at the moment it starts.
DEPT OF ECE TKREC 38
Automatic Toll Collection System Using RFID With Vehicle Classification Using Convolutional Neural Network
Capacitance:
A piezo buzzer can make higher SPL with higher capacitance, but it consumes more
electricity.
Sound Output:
The sound output is measured with a decibel meter at a distance of 10 cm, applying the
rated voltage as square waves.
Rated Frequency:
A buzzer can produce sound at many frequencies, but the highest and most stable SPL is
obtained at the rated frequency.
Operating Temp.:
Works reliably between -30℃ and +70℃.
CHAPTER-5
SOFTWARE DESCRIPTION
5.1 CREATING A PROJECT IN ARDUINO 1.7.11 VERSION
5.1.1 ARDUINO IDE INSTALLATION:
In this section we will go through the process of installing the Arduino IDE and connecting
an Arduino Uno to the Arduino IDE.
Step 1
First we must have our Arduino board (we can choose our favourite board) and a USB cable.
In case we use an Arduino Uno, Arduino Duemilanove, Arduino Mega 2560, or Diecimila, we
will need a standard USB cable (A plug to B plug). In case we use an Arduino Nano, we will need an
A to Mini-B cable.
Step 2 − Download the Arduino IDE software. We can get different versions of the Arduino IDE from
the Download page on the official Arduino website. We must select the version that is
compatible with our operating system (Windows, macOS, or Linux).
After the download is complete, unzip the file.
Step 3 − Power up the board. The board can draw power from an external supply or from the
USB connection. The power source is selected with a jumper, a small piece of plastic that fits onto
two of the three pins between the USB and power jacks.
Check that it is on the two pins closest to the USB port.
Connect the Arduino board to the computer using the USB cable. The green power LED
(labeled PWR) should glow.
Step 4 − Launch Arduino IDE.
After our Arduino IDE software is downloaded, we need to unzip the folder. Inside the folder, we
can find the application icon with an infinity label (application.exe).
Double click the icon to start the IDE.
Step 5 − Open our first project.
Once the software starts, we have two options: 1) create a new project, or 2) open an existing project example.
Step 6 − Select your Arduino board.
To avoid any error while uploading the program to the board, we must select the correct
Arduino board name, matching the board connected to the computer.
Go to Tools → Board and select the board.
Here we have selected the Arduino Uno board for this tutorial, but you must select
the name matching the board that you are using.
Step 7 − Select the serial port.
Select the serial device of the Arduino board. Go to the Tools → Serial Port menu. This is likely to be
COM3 or higher (COM1 and COM2 are usually reserved for hardware serial ports). To find out,
disconnect the Arduino board and re-open the menu; the entry that disappears should be
the Arduino board. Reconnect the board and select that serial port.
Step 8 − Upload the program to the board.
Before explaining how to upload the program to the board, we must describe the function
of each symbol appearing in the Arduino IDE toolbar.
4. Click on Next.
5. Select the installation type "Just Me" unless you are installing it for all users (which requires
Windows Administrator privileges) and click on Next.
6. Select a destination folder to install Anaconda and click the Next button.
If you want to watch the packages Anaconda is installing, click on Show Details.
7. After a successful installation you will see the "Thanks for installing Anaconda" dialog box.
OpenCV
OpenCV is a cross-platform library using which we can develop real-time computer
vision applications. It mainly focuses on image processing, video capture and analysis
including features like face detection and object detection.
Let’s start the chapter by defining the term "Computer Vision".
Computer Vision
Computer Vision can be defined as a discipline that explains how to reconstruct,
interpret, and understand a 3D scene from its 2D images, in terms of the properties of the
structure present in the scene. It deals with modeling and replicating human vision using
computer software and hardware.
Computer Vision overlaps significantly with the following fields:
• Image Processing: It focuses on image manipulation.
• Pattern Recognition: It explains various techniques to classify patterns.
• Photogrammetry: It is concerned with obtaining accurate measurements from images.
Videoio
This module covers video capturing and video codecs using the OpenCV library. In
the Java library of OpenCV, this module is included as a package with the name
org.opencv.videoio.
Calib3d
This module includes algorithms regarding basic multiple-view geometry algorithms,
single and stereo camera calibration, object pose estimation, stereo correspondence and
elements of 3D reconstruction. In the Java library of OpenCV, this module is included as a
package with the name org.opencv.calib3d.
features2d
This module includes the concepts of feature detection and description. In the Java
library of OpenCV, this module is included as a package with the name org.opencv.features2d.
Objdetect
This module includes the detection of objects and instances of the predefined classes
such as faces, eyes, mugs, people, cars, etc. In the Java library of OpenCV, this module is
included as a package with the name org.opencv.objdetect.
Highgui
This is an easy-to-use interface with simple UI capabilities. In the Java library of
OpenCV, the features of this module is included in two different packages namely,
org.opencv.imgcodecs and org.opencv.videoio.
NumPy
NumPy is a Python package. It stands for 'Numerical Python'. It is a library consisting
of multidimensional array objects and a collection of routines for processing of array.
Numeric, the ancestor of NumPy, was developed by Jim Hugunin. Another package Numarray
was also developed, having some additional functionalities. In 2005, Travis Oliphant created
NumPy package by incorporating the features of Numarray into Numeric package. There are
many contributors to this open source project.
• Operations related to linear algebra: NumPy has in-built functions for linear algebra and
random number generation.
NumPy is often used along with packages like SciPy (Scientific Python) and
Matplotlib (a plotting library). This combination is widely used as a replacement for MATLAB, a
popular platform for technical computing. However, the Python alternative to MATLAB is now seen
as a more modern and complete programming language.
The best way to enable NumPy is to use an installable binary package specific to your
operating system. These binaries contain the full SciPy stack (inclusive of the NumPy, SciPy,
matplotlib, IPython, SymPy and nose packages, along with core Python).
Windows
Anaconda (from https://www.continuum.io) is a free Python distribution for the SciPy
stack. It is also available for Linux and Mac.
Canopy (https://www.enthought.com/products/canopy/) is available as free as well as
commercial distribution with full SciPy stack for Windows, Linux and Mac.
Python (x,y): It is a free Python distribution with SciPy stack and Spyder IDE for Windows OS.
(Downloadable from http://python-xy.github.io/)
Linux
Package managers of respective Linux distributions are used to install one or more
packages in SciPy stack.
For Ubuntu
For Fedora
Building from Source
Core Python (2.6.x, 2.7.x and 3.2.x onwards) must be installed with distutils, and the zlib
module should be enabled.
GNU gcc (4.2 and above) C compiler must be available.
To test whether the NumPy module is properly installed, try to import it from the Python prompt:
import numpy as np
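A slightly fuller sanity check, using only core NumPy calls, confirms both that the import works and that basic array operations behave as expected:

```python
# Build a small array and exercise a few core NumPy operations.
import numpy as np

a = np.arange(6).reshape(2, 3)  # 2x3 array: [[0, 1, 2], [3, 4, 5]]
print(a.shape)                  # (2, 3)
print(a.sum())                  # 15
print((a * 2)[1, 2])            # elementwise multiply, then index: 10
```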
clear programming on both small and large scales. Van Rossum led the language community
until stepping down as its leader in July 2018.
Python features a dynamic type system and automatic memory management. It
supports multiple programming paradigms, including object-oriented, imperative, functional and
procedural styles, and has a large and comprehensive standard library.
Python interpreters are available for many operating systems. CPython, the
reference implementation of Python, is open-source software and has a community-based
development model, as do nearly all of Python's other implementations. Python is a
general-purpose interpreted, interactive, object-oriented, high-level programming language.
It was created by Guido van Rossum during 1985-1990. Like Perl, Python source code is
also available under the GNU General Public License (GPL).
Python is an easy-to-learn, powerful programming language. It has efficient
high-level data structures and a simple but effective approach to
object-oriented programming. Python's elegant syntax and dynamic typing,
together with its interpreted nature, make it an ideal language for scripting and rapid
application development in many areas on most platforms.
The Python interpreter is easily extended with new functions and data
types implemented in C or C++ (or other languages callable from C). Python is also
suitable as an extension language for customizable applications.
5.2.1 WRITING A PYTHON PROGRAM
Python programs must be written with a particular structure. The syntax must be
correct, or the interpreter will generate error messages and not execute the program. This
section introduces Python by providing a simple example program.
Listing 1.1 (simple.py) is one of the simplest Python programs that does something:
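The listing itself is not reproduced in this copy of the report; a minimal stand-in consistent with the description (our reconstruction, not the original listing) would be a one-line print program:

```python
# Hypothetical reconstruction of Listing 1.1 (simple.py): the simplest kind
# of Python program that "does something" -- print a message to the screen.
message = "Hello, world!"
print(message)
```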
We will consider two ways in which we can run Listing 1.1 (simple.py):
1. Enter the program directly into an IDLE’s interactive shell and
2. Enter the program into an IDLE’s editor, save it, and run it.
IDLE editor:
IDLE has a built-in editor. From the IDLE menu, select New Window, as shown in
Figure 1.4. Type the text shown in Listing 1.1 (simple.py) into the editor. Figure 1.5
shows the resulting editor window with the content of the simple Python program.
You can save your program using the Save option in the File menu, as shown in
Figure 1.6. Save the code to a file named simple.py. The actual name of the file is
unimportant, but the name "simple" accurately describes the nature of this program. The
extension .py is the extension used for Python source code. We can run the program
from within the IDLE editor by pressing the F5 function key or from the editor's Run menu:
Run→Run Module. The output appears in the IDLE interactive shell window.
Perhaps the quickest check to see whether command line editing is supported is typing
Control-P to the first Python prompt you get. If it beeps, you have command line editing; see
Appendix Interactive Input Editing and History Substitution for an introduction to the keys. If
nothing appears to happen, or if ^P is echoed, command line editing isn’t available; you’ll only
be able to use backspace to remove characters from the current line. The interpreter operates
somewhat like the Unix shell: when called with standard input connected to a tty device, it
reads and executes commands interactively; when called with a file name argument or with a
file as standard input, it reads and executes a script from that file.
A second way of starting the interpreter is python -c command [arg] ..., which executes
the statement(s) in command, analogous to the shell’s -c option. Since Python statements often
contain spaces or other characters that are special to the shell, it is usually advised to quote
command in its entirety with single quotes. Some Python modules are also useful as scripts.
These can be invoked using python -m module [arg] ..., which executes the source file for
module as if you had spelled out its full name on the command line. When a script file is used,
it is sometimes useful to be able to run the script and enter interactive mode afterwards. This
can be done by passing -i before the script.
Argument Passing
When known to the interpreter, the script name and additional arguments thereafter are
turned into a list of strings and assigned to the argv variable in the sys module. You can access
this list by executing import sys. The length of the list is at least one; when no script and no
arguments are given, sys.argv[0] is an empty string. When the script name is given as '-'
(meaning standard input), sys.argv[0] is set to '-'. When -c command is used, sys.argv[0] is set
to '-c'. When -m module is used, sys.argv[0] is set to the full name of the located module.
Options found after -c command or -m module are not consumed by the Python interpreter’s
option processing but left in sys.argv for the command or module to handle.
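The argv behaviour described above can be checked directly from a script. When run as a file, sys.argv[0] holds the script name and any further elements are the arguments:

```python
# sys.argv is always a list of strings with at least one element: the
# script name (or '-', '-c', or a module name, depending on invocation).
import sys

print(type(sys.argv))      # <class 'list'>
print(len(sys.argv) >= 1)  # True

extra_args = sys.argv[1:]  # arguments after the script name, possibly empty
print(isinstance(extra_args, list))  # True
```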
Interactive Mode
When commands are read from a tty, the interpreter is said to be in interactive mode. In
this mode it prompts for the next command with the primary prompt, usually three greater-than
signs (>>>); for continuation lines it prompts with the secondary prompt, by default three dots
(...). The interpreter prints a welcome message stating its version number and a copyright notice
before printing the first prompt
By default, Python source files are treated as encoded in UTF-8. In that encoding,
characters of most languages in the world can be used simultaneously in string literals,
identifiers and comments — although the standard library only uses ASCII characters for
identifiers, a convention that any portable code should follow. To display all these characters
properly, your editor must recognize that the file is UTF-8, and it must use a font that supports
all the characters in the file.
a. INSTALLING PYTHON
Go to www.python.org and download the latest version of Python (version 3.5 as of this
writing). It should be painless to install. If you have a Mac or Linux, you may already have
Python on your computer, though it may be an older version. If it is version 2.7 or earlier, then
you should install the latest version, as many of the programs in this book will not work
correctly on older versions.
b. IDLE
IDLE is a simple integrated development environment (IDE) that comes with Python. It’s a
program that allows you to type in your programs and run them. There are other IDEs for
Python, but for now I would suggest sticking with IDLE as it is simple to use. You can find
IDLE in the Python 3.4 folder on your computer.
When you first start IDLE, it starts up in the shell, which is an interactive window where
you can type in Python code and see the output in the same window. I often use the shell in
place of my calculator or to try out small pieces of code. But most of the time you will want to
open up a new window and type the program in there.
Note: At least on Windows, if you double-click a Python file on your desktop, your system will
run the program but not show the code, which is probably not what you want. Instead, if you
right-click on the file, there should be an option called Edit with IDLE. To edit an existing
Python file, either do that or start up IDLE and open the file through the File menu.
CHAPTER-6
RESULTS AND DISCUSSION
The results obtained are based on the live and continuous inputs given to the system. The
figures below show a sample image of the result, indicating the class of the detected
vehicle (bus/car/cycle, etc.) along with the toll amount associated with it. The results
obtained are reliable since the system also differentiates between vehicles and other moving
bodies.
CHAPTER-7
CONCLUSION AND FUTURE WORK
CONCLUSION
FUTURE SCOPE:
Machines are used in every part of human life. Machines work according to us, but in
today's world we increasingly work according to machines. The rush to soar high is immense; hence
machines are important, and so are their parts. If the parts do not fit well, a machine cannot
work properly, so the dimensions of the objects have a great impact. This AI- and IoT-based
project will help in measuring dimensions in real time. It is convenient and easy to use. It
also gives accuracy and assurance of the manufactured product. As it is a one-time investment, it
surely has a great future scope.
CHALLENGES:
The main purpose is to recognize a specific object in real time from a large number of
objects. Most recognition systems scale poorly with many recognizable objects: the
computational cost rises as the number of objects increases. Comparing and querying images
using color, texture, and shape is not enough, because two objects might have the same attributes.
Designing a recognition system that works in a dynamic environment and behaves
like a human is difficult. The main challenges in designing an object recognition system include lighting,
dynamic backgrounds, the presence of shadows, camera motion, the speed of the
moving objects, intermittent object motion, weather conditions, etc.
ADVANTAGES:
1. It is economical.
2. It is fast.
3. It reduces human error and increases efficiency.
4. It is easy to use.
5. Fewer errors translate directly into more profit.
6. It is not expensive: only a webcam is required.
APPLICATIONS:
➢ Mainly used in toll collection for vehicles, and also used in other sectors such as:
1. Defence.
2. Laboratories.
3. Manufacturing industries.
4. The textile industry.
5. Aerospace systems.
REFERENCES
2. J. Fang, Y. Zhou, Y. Yu, S. Du, "Fine-grained vehicle model recognition using a coarse-
to-fine convolutional neural network architecture", IEEE Trans. Intell. Transp. Syst., vol.
18, no. 7, pp. 1782-1792, 2017.
5. Varsha, Amit Kumar Mishra, Binita Pareek, "Automated Approach for Toll Detection Using
Circular Hough Transform and Scalar Sharpness Index", 2019 4th International Conference on
Internet of Things: Smart Innovation and Usages (IoT-SIU).
APPENDICES
A. SOURCE CODE
import cv2
thres = 0.45 # Threshold to detect object
import time
import serial
arduino = serial.Serial("COM3",9600)
time.sleep(2)
cap = cv2.VideoCapture(0)
cap.set(3,640)
cap.set(4,480)
classNames = []
classFile = 'coco.names'
with open(classFile,'rt') as f:
    classNames = f.read().rstrip('\n').split('\n')
configPath = 'ssd_mobilenet_v3_large_coco_2020_01_14.pbtxt'
weightsPath = 'frozen_inference_graph.pb'
net = cv2.dnn_DetectionModel(weightsPath,configPath)
net.setInputSize(320,320)
net.setInputScale(1.0/ 127.5)
net.setInputMean(127.5)
net.setInputSwapRB(True)
while True:
    success, img = cap.read()
    classIds, confs, bbox = net.detect(img, confThreshold=thres)
    if len(classIds) != 0:
        for classId, confidence, box in zip(classIds.flatten(), confs.flatten(), bbox):
            cv2.rectangle(img, box, color=(0, 255, 0), thickness=2)
            if classNames[classId - 1].upper() == "CAR":
                print("vehicle Detected", classNames[classId - 1].upper())
                cv2.putText(img, classNames[classId - 1].upper(), (box[0] + 10, box[1] + 30),
                            cv2.FONT_HERSHEY_COMPLEX, 1, (0, 0, 255), 2)
                cv2.putText(img, str(round(confidence * 100, 2)), (box[0] + 200, box[1] + 30),
                            cv2.FONT_HERSHEY_COMPLEX, 1, (255, 0, 0), 2)
                print("1 data sent")
                arduino.write(b'1')  # car -> toll code 1
                time.sleep(1)
            if classNames[classId - 1].upper() == "BUS":
                print("vehicle Detected", classNames[classId - 1].upper())
                cv2.putText(img, classNames[classId - 1].upper(), (box[0] + 10, box[1] + 30),
                            cv2.FONT_HERSHEY_COMPLEX, 1, (0, 0, 255), 2)
                cv2.putText(img, str(round(confidence * 100, 2)), (box[0] + 200, box[1] + 30),
                            cv2.FONT_HERSHEY_COMPLEX, 1, (255, 0, 0), 2)
                print("2 data sent")
                arduino.write(b'2')  # bus -> toll code 2
                time.sleep(1)
            if classNames[classId - 1].upper() == "TRUCK":
                print("vehicle Detected", classNames[classId - 1].upper())
                cv2.putText(img, classNames[classId - 1].upper(), (box[0] + 10, box[1] + 30),
                            cv2.FONT_HERSHEY_COMPLEX, 1, (0, 0, 255), 2)
                cv2.putText(img, str(round(confidence * 100, 2)), (box[0] + 200, box[1] + 30),
                            cv2.FONT_HERSHEY_COMPLEX, 1, (255, 0, 0), 2)
                print("2 data sent")
                arduino.write(b'2')  # truck shares toll code 2 with bus
                time.sleep(1)
            else:
                # note: this else pairs with the TRUCK check only, so '0' is
                # sent for every detection that is not a truck
                print("0 data sent")
                arduino.write(b'0')
    cv2.imshow("Original", img)
    cv2.waitKey(1)
#3 Car
#6 Bus
#8 truck