
NUMBER PLATE RECOGNITION IN BOTH IMAGE AND VIDEO

USING PYTHON

Submitted in partial fulfillment of the requirements for the award of

Bachelor of Engineering Degree in

Computer Science and Engineering


By
Monish K.B (Reg. No. 37110454)

NALLANI KAMALAKAR (Reg. No. 37110487)



DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

SCHOOL OF COMPUTING

SATHYABAMA
INSTITUTE OF SCIENCE AND TECHNOLOGY (DEEMED TO BE
UNIVERSITY)
Accredited with Grade “A” by NAAC
JEPPIAAR NAGAR, RAJIV GANDHI SALAI, CHENNAI - 600 119
MARCH - 2021

SATHYABAMA
INSTITUTE OF SCIENCE AND TECHNOLOGY
(DEEMED TO BE UNIVERSITY)
Accredited with Grade “A” by NAAC
(Established under Section 3 of UGC Act, 1956)
JEPPIAAR NAGAR, RAJIV GANDHI SALAI, CHENNAI– 600119

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

BONAFIDE CERTIFICATE

This is to certify that this Professional Training Report is the bonafide work of
Monish K.B (Reg. No. 37110454) and NALLANI KAMALAKAR (Reg. No. 37110487),
who underwent the professional training in “NUMBER PLATE RECOGNITION IN
BOTH IMAGE AND VIDEO USING PYTHON” from February 2021 to May 2021.

Internal Guide

Dr. S. Prayla Shyry, M.E., Ph.D.,

Head of the Department

Dr. S. VIGNESHWARI, M.E., Ph.D., and Dr. L. Laksmanan, M.E., Ph.D.,

Submitted for Viva voce Examination held on

Internal Examiner External Examiner

DECLARATION

We, Monish K.B (Reg. No. 37110454) and NALLANI KAMALAKAR
(Reg. No. 37110487), hereby declare that the Professional Training Project Report
“NUMBER PLATE RECOGNITION IN BOTH IMAGE AND VIDEO USING
PYTHON”, done by us under the guidance of Dr. S. Prayla Shyry, M.E., Ph.D., at
Sathyabama Institute of Science and Technology, is submitted in partial fulfilment
of the requirements for the award of the Bachelor of Engineering degree in
Computer Science and Engineering.

DATE:

PLACE: SIGNATURE OF THE CANDIDATE

ACKNOWLEDGEMENT

The satisfaction and elation that accompany the successful completion of any task
would be incomplete without mention of the people who have made it possible. It
is our great privilege to express our gratitude and respect to all those who have
guided us during the course of our Professional Training.

First and foremost, I would like to express my sincere gratitude to our beloved
Founder Chancellor Col. Dr. JEPPIAAR, M.A., B.L., Ph.D. I extend my sincere
thanks to our Chancellor Dr. MARIAZEENA JOHNSON, B.E., M.B.A., M.Phil.,
Ph.D., and the Vice President Dr. MARIE JOHNSON, B.E., M.B.A., M.Phil.,
Ph.D., for providing me the necessary facilities for the completion of the
professional training. I also acknowledge our Vice Chancellor Dr. S. SUNDAR
MANOHARAN, Ph.D., and the Pro Vice Chancellor Dr. T. SASIPRABA, M.E.,
Ph.D., for their constant support and endorsement.

I would like to express my gratitude to our Registrar Dr. S. S. RAU, Ph.D. and
Controller of Examinations Dr. IGNISABASTI PRABU, M.E., Ph.D., for their
valuable support offered to complete my professional training successfully.

I would like to express my gratitude to Dr. T. SASIKALA, M.E., Ph.D.,
Dr. S. VIGNESHWARI, M.E., Ph.D., and Dr. L. Laksmanan, M.E., Ph.D., Heads of
the Department of Computer Science and Engineering, Sathyabama Institute of
Science and Technology, for having been a constant source of support and
encouragement for the completion of the professional training.

I would also like to express my sincere thanks to our internal guide
Dr. S. Prayla Shyry, M.E., Ph.D., who guided me in the preparation of the report.

TRAINING CERTIFICATE

ABSTRACT

Automatic number plate recognition is an image and video processing
technology that uses the number (license) plate to identify a vehicle. The
objective is to design an automatic vehicle identification system based on the
vehicle number plate. The system can be used at the entrance of security gates
in highly restricted areas, such as the premises of top government offices,
Parliament, and courts. The developed system first detects the vehicle and
then captures an image of its number plate. The number plate region is
extracted from the image using image segmentation, and optical character
recognition technology is used to recognize the characters. The resulting
information is then compared with records in a database to retrieve specific
details such as the owner of the vehicle, place of registration, and address.
The system is implemented and simulated, and its performance is tested on
real images. The experiments show that the developed system successfully
captures and recognizes vehicle number plates in real images.

A vehicle's plate number is a unique identity by which an individual vehicle
can be identified. Vehicle number plate detection is useful in toll tax
collection and traffic challans, and can also be used in multilevel parking
areas. Keeping processing time in mind, we have worked on this project. In
this project, we capture a picture of the vehicle with the help of a CCTV
camera and find the number plate of the car. With this, no car owner has to
wait at the toll plaza; the system works like a kind of fast-track lane. This
project can be applied not only at toll plazas but also in traffic systems and
parking areas such as big bazaars, colleges, and shopping malls.

Keywords: vehicle number plate detection, edge detection algorithm, Python,
OpenCV.

CHAPTER 1

INTRODUCTION

Number-plate recognition is a technology that uses optical character
recognition (OCR) on image frames to read vehicle registration plates and
obtain vehicle data, using either existing cameras or cameras specifically
designed for the task. It is used by police forces around the world for law
enforcement purposes, including to check whether a vehicle is registered or
licensed. It is also used for electronic toll collection on pay-per-use roads and
as a method of cataloguing the movements of traffic, for example by highways
agencies. Number-plate recognition can be used to store the images captured by
the cameras as well as the text from the license plate, with some systems
configurable to store a photograph of the driver. Systems commonly use
infrared lighting to allow the camera to take the picture at any time of day or
night. Number-plate recognition technology must take into account plate
variations from place to place. Privacy issues have caused concerns about
number-plate recognition, such as governments tracking citizens' movements,
misidentification, high error rates, and increased government spending. Critics
have described it as a form of mass surveillance.

We have created a custom function that feeds Tesseract OCR the bounding-box
regions of license plates found by our custom YOLOv4 model in order to read
and extract the license plate numbers. Thorough pre-processing is performed on
the license plate in order to correctly extract the license plate number from the
image. The function in charge of the pre-processing and text extraction is
called recognize_plate and can be found in the file utils.py

Most number plate localization algorithms merge several procedures, resulting
in long computational times. The results are highly dependent on image quality,
since the reliability of the procedures degrades severely for complex, noisy
pictures that contain a lot of detail. Unfortunately, the various procedures
barely offer a remedy for this problem; precise camera adjustment is the only
solution. This means that the car must be photographed in a way that excludes
the environment as much as possible and makes the number plate as large as
possible. Adjusting the size is especially difficult in the case of fast cars,
since the optimum moment of exposure can hardly be guaranteed. Number plate
localization on the basis of edge finding relies on the observation that number
plates usually appear as high-contrast areas in the image. First, the original
colour image of the car is converted to a black-and-white image, called a
grayscale image. After this stage, with the help of OpenCV, the numbers on the
number plate can be recognized.

Number Plate Recognition has been one of the useful approaches for vehicle
surveillance. It can be applied at a number of public places for purposes such
as traffic-safety enforcement, automatic toll tax collection, car park systems,
and automatic vehicle parking systems. In automated systems, people utilize
computer-based expert systems to analyze and handle real-life problems such as
intelligent transportation systems. Presently, number plate detection and
recognition processing time is less than 50 milliseconds in many systems.

The escalating growth of contemporary urban and national road networks over
the last decades has created the need for efficient monitoring and management
of road traffic. Meanwhile, rising vehicle use causes social problems such as
accidents, traffic congestion, and consequent traffic pollution.

Real Time Number Plate Recognition is a process where vehicles are identified
or recognized using their number plate or license plate by using image
processing techniques so as to extract the vehicle number plate from digital
images.

Such systems normally comprise two components: a camera used to capture
vehicle number plate images, and software that extracts the number plates from
the captured images using a character recognition tool that translates pixels
into machine-readable characters. The technology is used widely in various
fields such as vehicle tracking, traffic monitoring, automatic payment of
tolls on highways or bridges, surveillance systems, toll collection points,
and parking management systems.

These algorithms are generally divided into four steps:

(1) Vehicle image acquisition

(2) Number plate extraction

(3) Character segmentation

(4) Character recognition.

The first step, capturing an image of the vehicle, looks very easy but is quite
an exigent task, as it is very difficult to capture an image of a moving
vehicle in real time in such a manner that no component of the vehicle,
especially the number plate, is missed. The success of the fourth step depends
on how well the second and third steps are able to locate the vehicle number
plate and separate each character.
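The four steps above can be sketched as a pipeline skeleton. Every function below is a hypothetical stub standing in for a real stage, not code from the project:

```python
# Hypothetical skeleton of the four-step ANPR pipeline (all names invented
# here for illustration, not taken from the report's code).
def acquire_image(camera):
    return camera()                          # (1) vehicle image acquisition

def extract_plate(frame):
    return frame["plate"]                    # (2) number plate extraction (stub)

def segment_characters(plate):
    return list(plate)                       # (3) character segmentation (stub)

def recognize_characters(chars):
    return "".join(chars)                    # (4) character recognition (stub)

def anpr_pipeline(camera):
    frame = acquire_image(camera)
    plate = extract_plate(frame)
    chars = segment_characters(plate)
    return recognize_characters(chars)
```

The skeleton makes the dependency explicit: if stage (2) or (3) returns a wrong region or a bad split, stage (4) has nothing correct to recognize.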

CHAPTER 2

AIM AND LITERATURE SURVEY

2.1 AIM :

The aim of the project is to design an efficient automatic authorized-vehicle
identification system using the vehicle number plate, so that the text on the
number plate can be identified. With the help of this project we can help
reduce crime; the system can be installed at toll gates, among many other uses.

2.2 GENERAL :

This project is built using Python, OpenCV, and Tesseract v4. Number-plate


recognition is a technology that uses optical character recognition on images
to read vehicle registration plates to create vehicle location data. It can use
existing closed-circuit television, road-rule enforcement cameras, or cameras
specifically designed for the task. Number-plate recognition is used by
police forces around the world for law enforcement purposes, including to
check if a vehicle is registered or licensed. It is also used for electronic
toll collection on pay-per-use roads and as a method of cataloguing the
movements of traffic.

PROPOSED SYSTEM :
In this project, keeping processing time in mind, a CCTV camera
automatically captures a picture of the vehicle, which is saved in a database.
The image is then retrieved from the database and processed in five steps,
with the number plate detected in the final step.

IMPLEMENTATION
Implementation steps
In this section, we describe the steps that were carried out during the work. We
perform testing on different number plates to measure the accuracy of the system.

• The first step is that an image of the vehicle is captured with a CCTV camera.
• Captured images are saved in a cloud S3 bucket.
• Then we apply the Canny edge detection algorithm; the pipeline uses the following
stages, starting from the original input image of the vehicle captured by the CCTV camera.

Fig 1.2 – Original input/image
Grayscale
Grayscaling is the process of converting an image from another colour space, for
example RGB, CMYK, or HSV, to shades of grey, varying between complete black and
complete white. An image can also be converted to grayscale using the standard
RGB-to-grayscale conversion formula imgGray = 0.2989 * R + 0.5870 * G + 0.1140 *
B. [18]

fig 1.3 – Grayscale
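As an illustration, the conversion formula quoted above can be applied directly with NumPy (a plain sketch, not the project's code):

```python
import numpy as np

def to_grayscale(rgb):
    """Apply imgGray = 0.2989*R + 0.5870*G + 0.1140*B channel-wise,
    for an array of shape (..., 3) holding R, G, B."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.2989 * r + 0.5870 * g + 0.1140 * b
```

The green weight is largest because human vision is most sensitive to green, so green contributes most to perceived brightness.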


Bilateral Filter
A bilateral filter is used for smoothing images and reducing noise while
preserving edges. Ordinary smoothing convolutions often result in a loss of
important edge information, since they blur out everything irrespective of
whether it is noise or an edge; the bilateral filter avoids this by also
weighting neighbouring pixels by intensity similarity. [20]

Fig 1.4 – Bilateral Filter
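To make the edge-preserving behaviour concrete, here is a naive NumPy implementation of the bilateral weighting idea (illustrative only; in practice cv2.bilateralFilter would be used):

```python
import numpy as np

def bilateral_filter(img, radius=1, sigma_space=1.0, sigma_range=10.0):
    """Naive bilateral filter: each pixel becomes a weighted average of its
    neighbours, weighted by BOTH spatial distance and intensity difference,
    so flat regions are smoothed while strong edges survive."""
    img = img.astype(np.float64)
    out = np.zeros_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            num = den = 0.0
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        g_space = np.exp(-(di * di + dj * dj) /
                                         (2.0 * sigma_space ** 2))
                        g_range = np.exp(-(img[ni, nj] - img[i, j]) ** 2 /
                                         (2.0 * sigma_range ** 2))
                        num += g_space * g_range * img[ni, nj]
                        den += g_space * g_range
            out[i, j] = num / den
    return out
```

With a small sigma_range, pixels across a strong edge get near-zero weight, which is exactly why the plate's character boundaries survive the smoothing.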


Sobel
The Sobel edge detector is a gradient-based method built on first-order
derivatives. It calculates the first derivatives of the image separately for
the X and Y axes. [18]

Fig 1.5 – Sobel edge detection
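A minimal NumPy sketch of the Sobel derivatives described above (cv2.Sobel computes the same thing, much faster):

```python
import numpy as np

def sobel_gradients(img):
    """Compute first-order X and Y derivatives with the 3x3 Sobel kernels."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=np.float64)   # responds to horizontal change
    ky = kx.T                                        # responds to vertical change
    img = img.astype(np.float64)
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(kx * patch)
            gy[i, j] = np.sum(ky * patch)
    return gx, gy
```

On a vertical step edge, gx responds strongly while gy stays zero, which is what lets the two derivatives separate edge orientation.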


Canny Edge Detection
The first argument is our input image. The second and third arguments are our
minVal and maxVal respectively. The fourth argument is the aperture size, the
size of the Sobel kernel used to find image gradients; by default it is 3. The
last argument is L2gradient, which specifies the equation for finding the
gradient magnitude. If it is True, the more accurate equation
Edge_Gradient(G) = sqrt(Gx^2 + Gy^2) is used; otherwise it uses
Edge_Gradient(G) = |Gx| + |Gy|. By default it is False. [18-19]

fig 1.6 – Canny Edge Detection
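The two gradient-magnitude formulas selected by L2gradient can be compared numerically (pure NumPy; the cv2.Canny call itself is shown only as a commented sketch):

```python
import numpy as np

# Compare the two magnitude formulas cv2.Canny chooses between via L2gradient.
gx = np.array([3.0, -3.0])
gy = np.array([4.0, 4.0])

l2 = np.sqrt(gx ** 2 + gy ** 2)    # L2gradient=True: accurate Euclidean magnitude
l1 = np.abs(gx) + np.abs(gy)       # L2gradient=False (default): |Gx| + |Gy|

# Typical call (sketch, assuming a grayscale uint8 image 'img'):
# edges = cv2.Canny(img, 100, 200, apertureSize=3, L2gradient=True)
```

The L1 form always over-estimates the true magnitude (7 vs 5 here) but avoids the square root, which is why it is the faster default.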


Contour:
Contour detection is used to find geometrical shapes in images, which can be
quite useful for simplifying problems that involve classification or object
detection.

CHAPTER 3
Methodology

3.1 HARDWARE REQUIREMENTS

• System : I3 PROCESSOR

• Hard Disk : 300 GB.

• Monitor : 15 VGA Colour.

• Ram : 4 GB

3.2 SOFTWARE REQUIREMENTS

• Operating system : Windows 10

• Programming Language : Python

• Applications used : Visual Studio, Anaconda, Tesseract v4

3.3 OVERVIEW OF SOFTWARE REQUIREMENTS

HISTORY OF PYTHON :

Python was conceived in the late 1980s, and its implementation began in
December 1989 by Guido van Rossum at Centrum Wiskunde & Informatica (CWI)
in the Netherlands as a successor to the ABC language, capable of exception
handling and of interfacing with the Amoeba operating system.

Python 3.0 (initially called Python 3000 or py3k) was released on 3
December 2008 after a long testing period. It is a major revision of the
language that is not completely backward-compatible with previous versions.
However, many of its major features have been backported to the Python 2.6.x
and 2.7.x version series, and releases of Python 3 include the 2to3 utility,
which automates the translation of Python 2 code to Python 3.

FEATURES OF PYTHON

Python is a dynamic, high-level, free, open-source, and interpreted programming
language. It supports object-oriented programming as well as procedure-oriented
programming. Python is a multi-paradigm programming language: object-oriented
and structured programming are fully supported, and many of its features
support functional programming and aspect-oriented programming, including via
metaprogramming. Many other paradigms are supported through extensions,
including design by contract and logic programming. Python uses dynamic typing
and a combination of reference counting and a cycle-detecting garbage collector
for memory management. It also features dynamic name resolution, which binds
method and variable names during program execution.

OpenCV :

OpenCV was started at Intel in 1999 by Gary Bradsky, and the first release came
out in 2000. Vadim Pisarevsky joined Gary Bradsky to manage Intel's Russian
software OpenCV team. In 2005, OpenCV was used on Stanley, the vehicle that
won the 2005 DARPA Grand Challenge. Later, its active development continued
under the support of Willow Garage with Gary Bradsky and Vadim Pisarevsky
leading the project. OpenCV now supports a multitude of algorithms related to
Computer Vision and Machine Learning and is expanding day by day. OpenCV
supports a wide variety of programming languages such as C++, Python, Java,
etc., and is available on different platforms including Windows, Linux, OS X,
Android, and iOS. Interfaces for high-speed GPU operations based on CUDA and
OpenCL are also under active development.

OpenCV-Python is the Python API for OpenCV, combining the best qualities of
the OpenCV C++ API and the Python language. OpenCV-Python is a library of
Python bindings designed to solve computer vision problems.

Python is a general purpose programming language started by Guido van Rossum


that became very popular very quickly, mainly because of its simplicity and code
readability. It enables the programmer to express ideas in fewer lines of code
without reducing readability.
Advantages of OpenCV:

1. Speed: Matlab is built on Java, and Java is built upon C. So, when you run a
Matlab program, your computer is busy interpreting all that Matlab code, turning
it into Java, and only then executing it. OpenCV, on the other hand, is
basically a library of functions written in C/C++, so you come much closer to
directly providing machine code to the computer: more of your processing cycles
go to image processing and fewer to interpreting. As a result, programs written
with OpenCV run much faster than similar programs written in Matlab. For
example, a small program to detect people's smiles in a sequence of video
frames would typically analyze 3-4 frames per second in Matlab, but at least
30 frames per second in OpenCV, resulting in real-time detection.

2. Resources needed: Due to the high-level nature of Matlab, it uses a lot of
system resources. Matlab code can require over a gigabyte of RAM to run through
video, whereas typical OpenCV programs only require about 70 MB of RAM to run
in real time.

3. Cost: The list price for the base MATLAB (commercial, single-user license,
no toolboxes) is around USD 2150, while OpenCV (BSD license) is free.

4. Portability: MATLAB and OpenCV run equally well on Windows, Linux and
macOS. However, any device that can run C can, in all probability, run OpenCV.

5. Specific: OpenCV was made for image processing; each function and data
structure was designed with the image processing coder in mind. Matlab, on the
other hand, is quite generic: you get almost anything in the world in the form
of toolboxes, all the way from financial toolboxes.

Methodology :

This project consists of three main stages

1) Number Plate Localization

2) Character Segmentation

3) Optical Character Recognition (OCR).

 The Number Plate Localization stage is where the number plate is detected.

 Character Segmentation is the second stage, where each character from the
detected number plate is segmented before recognition.

 In the last stage, the segmented characters are recognized, converting the
image format into characters, so that only useful information is retained.

In this project, we use an image segmentation method to find the number plate
in an image. After finding the number plate, we convert the colour image into
grey mode; it is essential to convert the colour number plate to greyscale so
that it can then be converted to black and white. Once the plate is black and
white, the system can easily understand the letters and numbers present on it.
The license plate can then be mapped out into individual character images.
This is called Character Segmentation.
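The grey-to-black-and-white step and the column-wise character segmentation described above can be sketched in NumPy (the threshold value and helper names are illustrative assumptions):

```python
import numpy as np

def to_black_and_white(gray, threshold=127):
    """Binarize the grey plate: bright pixels -> 255, dark -> 0."""
    return np.where(gray > threshold, 255, 0).astype(np.uint8)

def segment_characters(binary):
    """Vertical-projection segmentation: split wherever a whole column
    contains no foreground pixels, yielding one slice per character."""
    ink = (binary > 0).any(axis=0)
    segments, start = [], None
    for j, has_ink in enumerate(ink):
        if has_ink and start is None:
            start = j                       # a character run begins
        elif not has_ink and start is not None:
            segments.append(binary[:, start:j])
            start = None                    # the run ended
    if start is not None:
        segments.append(binary[:, start:])
    return segments
```

Each returned slice is one candidate character image, ready to be handed to the recognizer.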

image segmentation method :

Digital image processing is the use of computer algorithms to perform image


processing on digital images. Image segmentation is an important and challenging
part of image processing: the image segmentation technique is used to partition an
image into meaningful parts having similar features and properties. The main aim
of segmentation is simplification, i.e. representing an image in a meaningful and
easily analyzable way. Image segmentation is a necessary first step in image
analysis. The goal of image segmentation is to divide an image into

several parts/segments having similar features or attributes. The basic
applications of image segmentation are: Content-based image retrieval, Medical
imaging, Object detection and Recognition Tasks, Automatic traffic control
systems and Video surveillance, etc.

There are several existing techniques used for image segmentation, each with
its own importance. They can be grouped under two basic approaches to
segmentation: region-based and edge-based. For this project we have used the
edge-based approach.

Character segmentation has long been a critical area of the OCR process. The
higher recognition rates for isolated characters vs. those obtained for words and
connected character strings well illustrate this fact. A good part of recent progress
in reading unconstrained printed and written text may be ascribed to more
insightful handling of segmentation. This paper provides a review of these
advances. The aim is to provide an appreciation for the range of techniques that
have been developed, rather than to simply list sources. Segmentation methods
are listed under four main headings. What may be termed the "classical" approach
consists of methods that partition the input image into subimages, which are then
classified. The operation of attempting to decompose the image into classifiable
units is called "dissection". The second class of methods avoids dissection, and
segments the image either explicitly, by

classification of prespecified windows, or implicitly by classification of subsets
of spatial features collected from the image as a whole. The third strategy is a
hybrid of the first two, employing dissection together with recombination rules to
define potential segments, but using classification to select from the range of
admissible segmentation possibilities offered by these subimages. Finally,
holistic approaches that avoid segmentation by recognizing entire character
strings as units are described.

Optical character recognition

Optical character recognition is the electronic or mechanical conversion of


images of typed, handwritten or printed text into machine-encoded text, whether
from a scanned document, a photo of a document, a scene-photo

or from subtitle text superimposed on an image. Widely used as a form of data

entry from printed paper data records – whether passport documents, invoices,

bank statements, computerized receipts, business cards, mail, printouts of
static data, or any suitable documentation – it is a common method of digitizing printed

texts so that they can be electronically edited, searched, stored more compactly,

displayed on-line, and used in machine processes such as cognitive computing,

machine translation, text-to-speech, key data and text mining.

OCR is a field of research in pattern recognition, artificial intelligence and

computer vision. Early versions needed to be trained with images of each

character, and worked on one font at a time. Advanced systems capable of

producing a high degree of recognition accuracy for most fonts are now common,

and with support for a variety of digital image file format inputs. Some systems

are capable of reproducing formatted output that closely approximates the original

page including images, columns, and other non-textual components.

YOLO algorithm:

YOLO stands for You Only Look Once. YOLO is an effective real-time object
recognition algorithm, first described in the seminal 2015 paper by Joseph
Redmon et al. In this article we introduce the concept of object detection, the
YOLO algorithm itself, and one of the algorithm’s open source implementations.
There are several versions of YOLO; here we are using version 4 (YOLOv4).

Image classification is one of the many exciting applications of


convolutional neural networks. Aside from simple image classification, there
are plenty of fascinating problems in computer

vision, with object detection being one of the most interesting. It is
commonly associated with self-driving cars where systems blend
computer vision, LIDAR and other technologies to generate a
multidimensional representation of the road with all its participants.
Object detection is also commonly used in video surveillance, especially
in crowd monitoring to prevent terrorist attacks, count people for general
statistics or analyze customer experience with walking paths within
shopping centers.

Object Detection Overview :

There are several stages in object detection, which is reached through levels
of incremental complexity: image classification, object localization, and
finally object detection.

Image classification aims at assigning an image to one of a number of


different categories. Object localization then allows us to locate our
object in the image. In a real-life scenario, we need to go beyond
locating just one object but rather multiple objects in one image. For
example, a self-driving car has to find the location of other cars, traffic
lights, signs, humans and to take appropriate action based on this
information. Object detection provides the tools for doing just that –
finding all the objects in an image and drawing the so-called bounding
boxes around them. There are also some situations where we want to find
exact boundaries of our objects, in a process called instance segmentation.
Materials :
The following materials are used in the research work:
1. Mathwork – OpenCV
2. Mathwork – Simulink
3. OpenCV – the OpenCV computer vision library

Equipment: the following virtual equipment is used in the research work:
1. Oscilloscope
2. Generators, etc.

K-NN Algorithm Procedure

1: Begin
2: Input: Original Image
3: Output: Characters
4: Method: K-Nearest Neighbors
5: LP: License Plate
6: Convert RGB image to Grayscale
7: Filter Morphological Transformation
8: Transform Grayscale image to binary image
9: Filter Gaussian for blurring image
10: Find all contours in image
11: Search & recognize all possible characters in image
12: Crop part of image with highest candidate LP
13: Crop the LP from original image
14: Apply steps 6 to 11 again on cropped image
15: Print the characters in LP
16: End

CHAPTER 4

DISCUSSION

Character segmentation is very important in order to perform character recognition with a good
degree of accuracy; sometimes character recognition is not possible due to errors in character
segmentation. In some of the NPR literature, character segmentation is not discussed in detail.
Methods such as image binarization, CCA, and vertical and horizontal projection can produce
better character segmentation results. The system can detect and recognize vehicle plates from
various distances; the distance affects the size of the number plate in the image. Once the
vehicle number plate is detected, the individual characters are recognized using the OCR
algorithm. The OCR uses a correlation method for character recognition, and the probability of
recognition can also be calculated. The system is computationally inexpensive and can also be
implemented as a real-time vehicle identification system.

CHAPTER 5

Steps to implement Automatic Number Plate Recognition (ANPR) with Python

Step 1: Installing and Importing Required Dependencies


We are using OpenCV, EasyOCR, PyTorch and imutils. Before using these dependencies, let
us understand why we use each of them.
 OpenCV: An open-source computer vision and machine learning library designed to
facilitate various computer vision applications.
 EasyOCR: A Python library for Optical Character Recognition, designed for extracting
text from images.
 PyTorch: An open-source machine learning framework, required for EasyOCR.
 imutils: A collection of utility functions for image processing.
Step 2: Image Preprocessing
 Before proceeding to the next step, we need to clean up the image using some image
pre-processing techniques. This includes converting the image to grayscale, applying
noise-reduction techniques, etc.
Step 3: Edge detection
 Here we need to find where the edges are in the image, so that we can separate out the
number plate. The Canny edge detection algorithm helps us do so: it acts like a
super-sharp eye, detecting even the faintest edges. It works in steps:
 In the first step, it smooths the photo to remove noise, then it scans for areas where
brightness changes sharply (these are likely edges). It uses double thresholding to
segregate strong and weak edges.
 Lastly, it performs edge tracking by hysteresis.
 This leaves us with a clean image highlighting only the important edges, making it
easier to see the shapes and outlines in the photo.
Step 4: Find contours and apply a mask to separate out the actual number plate

If you are wondering what contours are: contours can be understood as the
boundaries of a shape with the same intensity in an image. They are used in image analysis
to identify objects in an image or separate an object from the background. The
cv2.findContours() function is used to detect these contours in binary images.
 The first line of the given snippet finds all the contours in the provided image; the
contours from the returned tuple are stored in the contours variable. Sorting is then
performed based on contour area, and the top 10 contours are kept for further processing.
 This code then finds a contour that approximates a polygon with four sides, which could
potentially be the license plate in an Automatic Number Plate Recognition (ANPR)
system. We loop over the top 10 contours to find the best fit for the number plate,
checking whether any contour has four sides, because our number plate has four sides;
if one is found, it could be our number plate.
Here, you can see we get the coordinates of the number plate as output. After this, our
next step is to mask out only the area that includes the number plate, so that when we
later extract text from it using OCR, we can do it efficiently. To mask out we will execute
the following code.

Step 5: Extract text from images using OCR

 This is a crucial step in ANPR: converting the image into text. It lets us use the
number plate data; we can store the data from the number plate in a database and use it
later for a number of applications, such as automatic tolls or automatic parking charges.
The following code converts the image into text using the EasyOCR library.
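The EasyOCR call is not reproduced in the report; a hedged sketch (the helper names are invented here, and easyocr must be installed via pip, which pulls in PyTorch) might look like:

```python
# Hypothetical sketch of the EasyOCR step (names invented for illustration).
def read_plate_text(cropped_plate):
    import easyocr                        # heavy import kept local to the function
    reader = easyocr.Reader(["en"])       # English license plates
    return reader.readtext(cropped_plate)   # -> list of (bbox, text, confidence)

def best_reading(results):
    """Pick the highest-confidence string from readtext() results."""
    return max(results, key=lambda r: r[2])[1]
```

readtext() can return several detections per plate (e.g. a state code and the number), so selecting by confidence, or concatenating in reading order, is a design choice.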
Step 6: Display the final output
Now that we have the text from the number plate, it is time to show it on the original
image. The code extracts the recognized text from the OCR result, then uses OpenCV to
draw this text and a rectangle around the license plate on the original image. The text is
positioned just near the license plate, and the rectangle is drawn around the plate based
on its approximated location. The final image, with the overlaid text and rectangle, is
then displayed.
Coding:
Gui.py
import tkinter as tk
from tkinter import filedialog
from tkinter import *
from PIL import ImageTk, Image
from tkinter import PhotoImage
import numpy as np
import cv2
import pytesseract as tess


def clean2_plate(plate):
    # Threshold the plate crop and keep the largest plausible character region
    gray_img = cv2.cvtColor(plate, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(gray_img, 110, 255, cv2.THRESH_BINARY)
    num_contours, hierarchy = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
                                               cv2.CHAIN_APPROX_NONE)

    if num_contours:
        contour_area = [cv2.contourArea(c) for c in num_contours]
        max_cntr_index = np.argmax(contour_area)
        max_cnt = num_contours[max_cntr_index]
        max_cntArea = contour_area[max_cntr_index]
        x, y, w, h = cv2.boundingRect(max_cnt)
        if not ratioCheck(max_cntArea, w, h):
            return plate, None
        final_img = thresh[y:y + h, x:x + w]
        return final_img, [x, y, w, h]
    else:
        return plate, None


def ratioCheck(area, width, height):
    # A plate candidate must fall within empirical area and aspect-ratio bounds
    ratio = float(width) / float(height)
    if ratio < 1:
        ratio = 1 / ratio
    if (area < 1063.62 or area > 73862.5) or (ratio < 3 or ratio > 6):
        return False
    return True


def isMaxWhite(plate):
    # Plates are mostly white, so require a high mean intensity
    avg = np.mean(plate)
    if avg >= 115:
        return True
    else:
        return False


def ratio_and_rotation(rect):
    # Reject candidates rotated more than 15 degrees or badly proportioned
    (x, y), (width, height), rect_angle = rect
    if width > height:
        angle = -rect_angle
    else:
        angle = 90 + rect_angle
    if angle > 15:
        return False
    if height == 0 or width == 0:
        return False
    area = height * width
    if not ratioCheck(area, width, height):
        return False
    else:
        return True


top = tk.Tk()
top.geometry('900x700')
top.title('Number Plate Recognition')
# top.wm_iconbitmap('/home/shivam/Dataflair/Keras Projects_CIFAR/GUI/logo.ico')
top.iconphoto(True, PhotoImage(file="/home/shivam/Dataflair/Keras Projects_CIFAR/GUI/logo.png"))
img = ImageTk.PhotoImage(Image.open("logo.png"))
top.configure(background='#CDCDCD')
label = Label(top, background='#CDCDCD', font=('arial', 35, 'bold'))
sign_image = Label(top, bd=10)
plate_image = Label(top, bd=10)


def classify(file_path):
    # Detect the plate in the uploaded image and OCR its text
    res_text = [0]
    res_img = [0]
    img = cv2.imread(file_path)
    img2 = cv2.GaussianBlur(img, (3, 3), 0)
    img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    img2 = cv2.Sobel(img2, cv2.CV_8U, 1, 0, ksize=3)
    _, img2 = cv2.threshold(img2, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    element = cv2.getStructuringElement(shape=cv2.MORPH_RECT, ksize=(17, 3))
    morph_img_threshold = img2.copy()
    cv2.morphologyEx(src=img2, op=cv2.MORPH_CLOSE, kernel=element,
                     dst=morph_img_threshold)
    num_contours, hierarchy = cv2.findContours(morph_img_threshold,
                                               mode=cv2.RETR_EXTERNAL,
                                               method=cv2.CHAIN_APPROX_NONE)
    cv2.drawContours(img2, num_contours, -1, (0, 255, 0), 1)

    for i, cnt in enumerate(num_contours):
        min_rect = cv2.minAreaRect(cnt)
        if ratio_and_rotation(min_rect):
            x, y, w, h = cv2.boundingRect(cnt)
            plate_img = img[y:y + h, x:x + w]
            print("Number identified number plate...")
            res_img[0] = plate_img
            cv2.imwrite("result.png", plate_img)
            if isMaxWhite(plate_img):
                clean_plate, rect = clean2_plate(plate_img)
                if rect:
                    fg = 0
                    x1, y1, w1, h1 = rect
                    x, y, w, h = x + x1, y + y1, w1, h1
                    plate_im = Image.fromarray(clean_plate)
                    text = tess.image_to_string(plate_im, lang='eng')
                    res_text[0] = text
                    if text:
                        break

    label.configure(foreground='#011638', text=res_text[0])
    uploaded = Image.open("result.png")
    im = ImageTk.PhotoImage(uploaded)
    plate_image.configure(image=im)
    plate_image.image = im
    plate_image.pack()
    plate_image.place(x=560, y=320)


def show_classify_button(file_path):
    classify_b = Button(top, text="Classify Image",
                        command=lambda: classify(file_path), padx=10, pady=5)
    classify_b.configure(background='#364156', foreground='white',
                         font=('arial', 15, 'bold'))
    classify_b.place(x=490, y=550)


def upload_image():
    try:
        file_path = filedialog.askopenfilename()
        uploaded = Image.open(file_path)
        uploaded.thumbnail(((top.winfo_width() / 2.25), (top.winfo_height() / 2.25)))
        im = ImageTk.PhotoImage(uploaded)
        sign_image.configure(image=im)
        sign_image.image = im
        label.configure(text='')
        show_classify_button(file_path)
    except Exception:
        pass


upload = Button(top, text="Upload an image", command=upload_image, padx=10, pady=5)
upload.configure(background='#364156', foreground='white', font=('arial', 15, 'bold'))
upload.pack()
upload.place(x=210, y=550)
sign_image.pack()
sign_image.place(x=70, y=200)
label.pack()
label.place(x=500, y=220)
heading = Label(top, image=img)
heading.configure(background='#CDCDCD', foreground='#364156')
heading.pack()
top.mainloop()

main.py
import numpy as np
import cv2
from PIL import Image
import pytesseract as tess


def clean2_plate(plate):
    # Threshold the plate crop and keep the largest plausible character region
    gray_img = cv2.cvtColor(plate, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(gray_img, 110, 255, cv2.THRESH_BINARY)
    if cv2.waitKey(0) & 0xff == ord('q'):
        pass
    num_contours, hierarchy = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
                                               cv2.CHAIN_APPROX_NONE)

    if num_contours:
        contour_area = [cv2.contourArea(c) for c in num_contours]
        max_cntr_index = np.argmax(contour_area)
        max_cnt = num_contours[max_cntr_index]
        max_cntArea = contour_area[max_cntr_index]
        x, y, w, h = cv2.boundingRect(max_cnt)
        if not ratioCheck(max_cntArea, w, h):
            return plate, None
        final_img = thresh[y:y + h, x:x + w]
        return final_img, [x, y, w, h]
    else:
        return plate, None


def ratioCheck(area, width, height):
    # A plate candidate must fall within empirical area and aspect-ratio bounds
    ratio = float(width) / float(height)
    if ratio < 1:
        ratio = 1 / ratio
    if (area < 1063.62 or area > 73862.5) or (ratio < 3 or ratio > 6):
        return False
    return True


def isMaxWhite(plate):
    # Plates are mostly white, so require a high mean intensity
    avg = np.mean(plate)
    if avg >= 115:
        return True
    else:
        return False


def ratio_and_rotation(rect):
    # Reject candidates rotated more than 15 degrees or badly proportioned
    (x, y), (width, height), rect_angle = rect
    if width > height:
        angle = -rect_angle
    else:
        angle = 90 + rect_angle
    if angle > 15:
        return False
    if height == 0 or width == 0:
        return False
    area = height * width
    if not ratioCheck(area, width, height):
        return False
    else:
        return True


img = cv2.imread("testData/sample15.jpg")
print("Number input image...")
cv2.imshow("input", img)
if cv2.waitKey(0) & 0xff == ord('q'):
    pass

# Preprocess: blur, grayscale, vertical-edge Sobel, Otsu threshold
img2 = cv2.GaussianBlur(img, (3, 3), 0)
img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
img2 = cv2.Sobel(img2, cv2.CV_8U, 1, 0, ksize=3)
_, img2 = cv2.threshold(img2, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Morphological closing with a wide kernel merges characters into plate blobs
element = cv2.getStructuringElement(shape=cv2.MORPH_RECT, ksize=(17, 3))
morph_img_threshold = img2.copy()
cv2.morphologyEx(src=img2, op=cv2.MORPH_CLOSE, kernel=element,
                 dst=morph_img_threshold)
num_contours, hierarchy = cv2.findContours(morph_img_threshold,
                                           mode=cv2.RETR_EXTERNAL,
                                           method=cv2.CHAIN_APPROX_NONE)
cv2.drawContours(img2, num_contours, -1, (0, 255, 0), 1)

for i, cnt in enumerate(num_contours):
    min_rect = cv2.minAreaRect(cnt)
    if ratio_and_rotation(min_rect):
        x, y, w, h = cv2.boundingRect(cnt)
        plate_img = img[y:y + h, x:x + w]
        print("Number identified number plate...")
        cv2.imshow("num plate image", plate_img)
        if cv2.waitKey(0) & 0xff == ord('q'):
            pass
        if isMaxWhite(plate_img):
            clean_plate, rect = clean2_plate(plate_img)
            if rect:
                fg = 0
                x1, y1, w1, h1 = rect
                x, y, w, h = x + x1, y + y1, w1, h1
                plate_im = Image.fromarray(clean_plate)
                text = tess.image_to_string(plate_im, lang='eng')
                print("Number Detected Plate Text : ", text)
Output: the detected number plate region and the recognized plate text, as displayed by the program.
CHAPTER 6

CONCLUSION AND FUTURE WORK

CONCLUSION:

This paper presents a recognition method in which vehicle number plate images and video are captured by digital cameras and processed to extract the plate information. A rear image of the vehicle is captured and processed using various algorithms, and good image preprocessing almost guarantees successful recognition. We further plan to study the characteristics of automatic number plate systems to improve performance. The implementation works quite well; however, there is still room for improvement. The camera used in this project is sensitive to vibration and fast-moving targets because of its long shutter time, so robustness and speed can be increased by using a high-resolution camera. The OCR method used for recognition is sensitive to misalignment and to varying character sizes; an affine transformation could be applied to improve recognition across different sizes and angles. Statistical analysis could also be used to estimate the probability of detection and recognition of vehicle number plates.

Future work:

NPR can be further exploited for vehicle owner identification, vehicle model identification, traffic control, vehicle speed control, and vehicle location tracking. It can be extended to multilingual NPR, identifying the language of the characters automatically from the training data. It offers benefits such as traffic-safety enforcement, security in cases of suspicious vehicle activity, ease of use, and immediate availability of information compared with manually searching vehicle owner registration details, and it is cost-effective for any country. For low-resolution images, improvement algorithms such as image super-resolution should be investigated. Most NPR systems process a single number plate, but in real time a captured image may contain several vehicle number plates; future systems should handle images with multiple plates, whereas most existing systems work only on offline images of a single vehicle.
References

[1] Chirag Patel, "Automatic Number Plate Recognition System (ANPR): A Survey," Smt. Chandaben Mohanbhai Patel Institute of Computer Applications (CMPICA), Charotar University of Science and Technology (CHARUSAT), International Journal of Computer Applications (0975-8887), Vol. 69, No. 9, May 2013.

[2] P. R. Sanap and S. P. Narote, "License Plate Recognition System - Survey," AIP Conference Proceedings 1324, 255 (2010); https://doi.org/10.1063/1.3526208, published online 03 December 2010.

[3] Younkwan Lee, Juhyun Lee, Hoyeon Ahn, and Moongu Jeon, "SNIDER: Single Noisy Image Denoising and Rectification for Improving License Plate Recognition," Machine Learning and Vision Laboratory, Gwangju Institute of Science and Technology (GIST), Korea.

[4] Yi Wang, Zhen-Peng Bian, Yunhao Zhou, and Lap-Pui Chau, "Rethinking and Designing a High-performing Automatic License Plate Recognition Approach," IEEE.

[5] S. Kranthi, K. Pranathi, and A. Srisaila, "Automatic Number Plate Recognition," International Journal of Advancements in Technology, Vol. 2, No. 3, July 2011, ISSN 0976-4860, http://ijict.org/, p. 408. Information Technology, VR Siddhartha Engineering College, Vijayawada, India.

[6] Muhammad Tahir Qadri and Muhammad Asif, "Automatic Number Plate Recognition System for Vehicle Identification Using Optical Character Recognition," Department of Electronic Engineering, Sir Syed University of Engineering & Technology, Karachi, Pakistan, 2009.

[7] Aruna Bajpai, "A Survey on Automatic Vehicle Number Plate Detection System," International Journal of Computer Science Trends and Technology (IJCST), Vol. 5, Issue 2, Mar-Apr 2017, ISSN 2347-8578, p. 291, www.ijcstjournal.org. Department of Computer Science & Engineering, ITM GOI, Gwalior.