Final Word Project
USING PYTHON
SATHYABAMA
INSTITUTE OF SCIENCE AND TECHNOLOGY (DEEMED TO BE
UNIVERSITY)
Accredited with Grade “A” by NAAC
JEPPIAAR NAGAR, RAJIV GANDHI SALAI, CHENNAI - 600 119
MARCH - 2021
SATHYABAMA
INSTITUTE OF SCIENCE AND TECHNOLOGY
(DEEMED TO BE UNIVERSITY)
Accredited with Grade “A” by NAAC
(Established under Section 3 of UGC Act, 1956)
JEPPIAAR NAGAR, RAJIV GANDHI SALAI, CHENNAI– 600119
BONAFIDE CERTIFICATE
This is to certify that this Professional Training Report is the bonafide work of Monish K.B (Reg. No. 37110454) and Nallani Kamalakar (Reg. No. 37110487), who underwent the professional training in "NUMBER PLATE RECOGNITION USING PYTHON".
Internal Guide
Dr. S. Prayla Shyry, M.E., Ph.D.
Internal Examiner External Examiner
DECLARATION
I hereby declare that the Professional Training Project Report "NUMBER PLATE RECOGNITION USING PYTHON" was carried out by me under the guidance of the internal guide, Dr. S. Prayla Shyry, M.E., Ph.D.
DATE:
ACKNOWLEDGEMENT
The satisfaction and elation that accompany the successful completion of any task would be incomplete without mention of the people who made it possible. It is my great privilege to express my gratitude and respect to all those who have guided me during the course of my Professional Training.
First and foremost, I express my sincere gratitude to our beloved Founder Chancellor Col. Dr. JEPPIAAR, M.A., B.L., Ph.D., and Chancellor Dr. MARIAZEENA JOHNSON. I extend my sincere thanks to our Pro Chancellor Dr. MARIAZEENA JOHNSON, B.E., M.B.A., M.Phil., Ph.D., and the Vice President Dr. MARIE JOHNSON, B.E., M.B.A., M.Phil., Ph.D., for providing me the necessary facilities for the completion of the professional training. I also acknowledge our Vice Chancellor Dr. S. SUNDAR MANOHARAN, Ph.D., and the Pro Vice Chancellor Dr. T. SASIPRABA, M.E., Ph.D., for their constant support and endorsement.
I would like to express my gratitude to our Registrar Dr. S. S. RAU, Ph.D., and Controller of Examinations Dr. IGNISABASTI PRABU, M.E., Ph.D., for the valuable support offered to complete my professional training successfully.
I would also like to express my sincere thanks to our internal guide, Dr. S. Prayla Shyry, M.E., Ph.D., who guided me in the preparation of this report.
TRAINING CERTIFICATE
ABSTRACT
CHAPTER 1 INTRODUCTION
We have created a custom function that feeds Tesseract OCR the bounding-box regions of license plates found by our custom YOLOv4 model, in order to read and extract the license plate numbers. Thorough pre-processing is performed on the license plate in order to correctly extract the license plate
number from the image. The function in charge of the pre-processing and text extraction is called recognize_plate and can be found in the file utils.py.
Number Plate Recognition has been one of the most useful approaches for vehicle surveillance. It can be applied at a number of public places to fulfil purposes such as traffic safety enforcement, automatic toll tax collection, car park systems and automatic vehicle parking systems. In automated systems, people utilize computer-based expert systems to analyze and handle real-life problems such as intelligent transportation systems. Presently, number plate detection and recognition processing time is less than 50 milliseconds in many systems.
Real Time Number Plate Recognition is a process where vehicles are identified or recognized by their number plate or license plate, using image processing techniques to extract the vehicle number plate from digital images.
Optical Character Recognition (OCR) is a recognition tool that allows pixels to be translated into numerically readable characters. It is used widely in various fields such as vehicle tracking, traffic monitoring, automatic payment of tolls on highways or bridges, surveillance systems, toll collection points, and parking management systems.
The process involves four steps: (1) image capture, (2) number plate detection, (3) character segmentation and (4) character recognition.
The first step, capturing the image of the vehicle, looks very easy but is quite exigent: it must be done in such a manner that no component of the vehicle, especially the vehicle number plate, is missed. The success of the fourth step depends on how well the second and third steps are able to locate the vehicle number plate and separate each character.
CHAPTER 2
2.1 AIM :
2.2 GENERAL :
PROPOSED SYSTEM :
In this project, a CCTV camera automatically captures a picture of the vehicle, which is saved in a database. The image is then taken back from the database and processed in five steps, with the number plate detected in the last step.
IMPLEMENTATION
Implementation steps
In this section, we discuss the steps that were implemented while carrying out the work. We perform testing on different number plates to measure the accuracy of the system.
• The first step is that an image of the vehicle is captured with a CCTV camera.
• Captured images are saved in the cloud S3 bucket.
• Then we apply the Canny edge detection algorithm; in this algorithm we use some functions on the original input image of the vehicle captured with the CCTV camera.
Fig 1.2 – Original input/image
Grayscale
Grayscaling is the process of converting an image from other color spaces, for example RGB, CMYK, HSV and so on, to shades of gray. It varies between complete black and complete white. We can also convert an image to grayscale using the standard RGB-to-grayscale conversion formula, imgGray = 0.2989 * R + 0.5870 * G + 0.1140 * B. [18]
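The formula above can be applied directly with NumPy (a small sketch; OpenCV's cv2.cvtColor applies the same weighting internally, with the caveat that OpenCV loads images in BGR channel order):

```python
import numpy as np

def to_grayscale(img_rgb):
    # imgGray = 0.2989*R + 0.5870*G + 0.1140*B, applied per pixel.
    r = img_rgb[..., 0].astype(float)
    g = img_rgb[..., 1].astype(float)
    b = img_rgb[..., 2].astype(float)
    return 0.2989 * r + 0.5870 * g + 0.1140 * b

pixel = np.array([[[100, 200, 50]]], dtype=np.uint8)
print(to_grayscale(pixel)[0, 0])  # 0.2989*100 + 0.587*200 + 0.114*50 = 152.99
```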
CHAPTER 3
Methodology
3.1 HARDWARE REQUIREMENTS
• System : Intel Core i3 processor
• Ram : 4 GB
3.3 OVERVIEW OF SOFTWARE REQUIREMENTS
HISTORY OF PYTHON :
Python was conceived in the late 1980s, and its implementation began in December 1989 by Guido van Rossum at Centrum Wiskunde & Informatica (CWI) in the Netherlands, as a successor to the ABC language, capable of exception handling and interfacing with the Amoeba operating system.
FEATURES OF PYTHON
Python is a dynamic, high-level, free, open-source and interpreted programming language. It supports object-oriented programming as well as procedure-oriented programming. Python is a multi-paradigm programming language: object-oriented programming and structured programming are fully supported, and many of its features support functional programming and aspect-oriented programming, including metaprogramming. Many other paradigms are supported via extensions, including design by contract and logic programming.
Python uses dynamic typing, and a combination of reference counting and a cycle-detecting garbage collector for memory management. It also features dynamic name resolution, which binds method and variable names during program execution.
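The points above (dynamic typing, reference counting, cycle-detecting garbage collection) can be seen directly in a few lines; a small illustrative snippet:

```python
import gc
import sys

# Dynamic typing: the same name can be rebound to values of different types.
x = 42
x = "forty-two"
assert isinstance(x, str)

# Reference counting: aliasing an object raises its reference count by one.
data = [1, 2, 3]
before = sys.getrefcount(data)
alias = data
after = sys.getrefcount(data)
assert after == before + 1

# The cycle-detecting garbage collector reclaims self-referential objects
# that plain reference counting alone could never free.
cycle = []
cycle.append(cycle)
del cycle
gc.collect()
```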
OpenCV :
OpenCV was started at Intel in 1999 by Gary Bradski, and the first release came out in 2000. Vadim Pisarevsky joined Gary Bradski to manage Intel's Russian software OpenCV team. In 2005, OpenCV was used on Stanley, the vehicle that won the 2005 DARPA Grand Challenge. Later, its active development continued under the support of Willow Garage with Gary Bradski and Vadim Pisarevsky
leading the project. OpenCV now supports a multitude of algorithms related to
Computer Vision and Machine Learning and is expanding day by day. OpenCV
supports a wide variety of programming languages such as C++, Python, Java,
etc., and is available on different platforms including Windows, Linux, OS X,
Android, and iOS. Interfaces for high-speed GPU operations based on CUDA and
OpenCL are also under active development.
OpenCV-Python is the Python API for OpenCV, combining the best qualities of the OpenCV C++ API and the Python language. OpenCV-Python is a library of Python bindings designed to solve computer vision problems.
Methodology :
1) Number Plate Detection
Number Plate Detection is the first stage, in which the region of the image containing the number plate is being detected.
2) Character Segmentation
Character Segmentation is the second stage, in which each character from the detected number plate is separated. In the last stage, characters are segmented from the number plate so that only useful information is retained for recognition, and the image is converted into characters.
Image segmentation is the process of dividing an image into several parts/segments having similar features or attributes. The basic applications of image segmentation are content-based image retrieval, medical imaging, object detection and recognition tasks, automatic traffic control systems, video surveillance, etc.
There are several existing techniques used for image segmentation, and each has its own importance. All of these techniques follow one of two basic approaches to segmentation: region-based and edge-based. For this project we have used the edge-based approach.
Character segmentation has long been a critical area of the OCR process. The
higher recognition rates for isolated characters vs. those obtained for words and
connected character strings well illustrate this fact. A good part of recent progress
in reading unconstrained printed and written text may be ascribed to more
insightful handling of segmentation. This paper provides a review of these
advances. The aim is to provide an appreciation for the range of techniques that
have been developed, rather than to simply list sources. Segmentation methods
are listed under four main headings. What may be termed the "classical" approach
consists of methods that partition the input image into subimages, which are then
classified. The operation of attempting to decompose the image into classifiable
units is called "dissection". The second class of methods avoids dissection, and
segments the image either explicitly, by
classification of prespecified windows, or implicitly by classification of subsets
of spatial features collected from the image as a whole. The third strategy is a
hybrid of the first two, employing dissection together with recombination rules to
define potential segments, but using classification to select from the range of
admissible segmentation possibilities offered by these subimages. Finally,
holistic approaches that avoid segmentation by recognizing entire character
strings as units are described.
OCR is widely used as a form of data entry from printed paper data records – whether passport documents, invoices, or other records – converting printed texts so that they can be electronically edited, searched, and stored more compactly. OCR is a field of research in pattern recognition, artificial intelligence and computer vision. Systems producing a high degree of recognition accuracy for most fonts are now common, with support for a variety of digital image file format inputs. Some systems are capable of reproducing formatted output that closely approximates the original.
YOLO algorithm:
YOLO stands for You Only Look Once. YOLO is an effective real-time object recognition algorithm, first described in the seminal 2015 paper by Joseph Redmon et al. Here we introduce the concept of object detection and the YOLO algorithm itself. There are several versions of YOLO; in this project we use version 4 (YOLOv4).
There are many tasks in computer vision, with object detection being one of the most interesting. It is
commonly associated with self-driving cars where systems blend
computer vision, LIDAR and other technologies to generate a
multidimensional representation of the road with all its participants.
Object detection is also commonly used in video surveillance, especially
in crowd monitoring to prevent terrorist attacks, count people for general
statistics or analyze customer experience with walking paths within
shopping centers.
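Detectors such as YOLO emit many overlapping candidate boxes per object; a post-processing step called non-maximum suppression (NMS) keeps only the highest-scoring box of each overlapping cluster. A minimal NumPy sketch of the idea (illustrative, not the project's code):

```python
import numpy as np

def iou(a, b):
    # Intersection-over-union of two boxes given as [x1, y1, x2, y2].
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    # Greedily keep the best-scoring box, drop everything overlapping it.
    order = list(np.argsort(scores)[::-1])
    keep = []
    while order:
        best = order.pop(0)
        keep.append(int(best))
        order = [j for j in order if iou(boxes[best], boxes[j]) <= iou_thresh]
    return keep

boxes = [[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]: box 1 overlaps box 0 and is dropped
```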
CHAPTER 4
DISCUSSION
Character segmentation is very important in order to perform character recognition with a good degree of accuracy. Sometimes character recognition is not possible due to errors in character segmentation, and in some of the NPR literature character segmentation is not discussed in detail. Methods such as image binarization, CCA (connected component analysis), and vertical and horizontal projection can produce better character segmentation results. The system can detect and recognize vehicle plates from various distances; the distance affects the size of the number plate in the image. Once the vehicle number plate is detected, the individual characters are recognized using the OCR algorithm. The OCR uses a correlation method for character recognition, and the probability of recognition can also be calculated. The system is computationally inexpensive and can also be implemented as a real-time vehicle identification system.
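The vertical-projection method mentioned above can be sketched in a few lines of NumPy: summing ink pixels per column gives a profile whose zero-valleys mark the gaps between characters (illustrative sketch, not the project's code):

```python
import numpy as np

def segment_by_projection(binary):
    # binary: 2-D array with 255 on character (ink) pixels, 0 elsewhere.
    profile = (binary > 0).sum(axis=0)   # ink pixels per column
    segments, start = [], None
    for x, ink in enumerate(profile):
        if ink and start is None:
            start = x                    # entering a character
        elif not ink and start is not None:
            segments.append((start, x))  # leaving a character
            start = None
    if start is not None:
        segments.append((start, len(profile)))
    return segments

# Three "characters" separated by blank columns.
plate = np.zeros((12, 30), dtype=np.uint8)
plate[:, 2:5] = plate[:, 10:14] = plate[:, 20:26] = 255
print(segment_by_projection(plate))  # [(2, 5), (10, 14), (20, 26)]
```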
CHAPTER 5
Coding :
Gui.py
import tkinter as tk
from tkinter import filedialog
from tkinter import *
from PIL import ImageTk, Image
from tkinter import PhotoImage
import numpy as np
import cv2
import pytesseract as tess

def ratioCheck(area, width, height):
    # Accept only regions whose area and aspect ratio are plate-like.
    ratio = float(width) / float(height)
    if ratio < 1:
        ratio = 1 / ratio
    if (area < 1063.62 or area > 73862.5) or (ratio < 3 or ratio > 6):
        return False
    return True

def clean2_plate(plate):
    # Threshold the plate crop and keep the largest plate-like contour.
    gray_img = cv2.cvtColor(plate, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(gray_img, 110, 255, cv2.THRESH_BINARY)
    num_contours = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
                                    cv2.CHAIN_APPROX_NONE)[-2]
    if num_contours:
        contour_area = [cv2.contourArea(c) for c in num_contours]
        max_cntr_index = np.argmax(contour_area)
        max_cnt = num_contours[max_cntr_index]
        max_cntArea = contour_area[max_cntr_index]
        x, y, w, h = cv2.boundingRect(max_cnt)
        if not ratioCheck(max_cntArea, w, h):
            return plate, None
        final_img = thresh[y:y+h, x:x+w]
        return final_img, [x, y, w, h]
    else:
        return plate, None

def isMaxWhite(plate):
    # A genuine plate crop is mostly light pixels.
    avg = np.mean(plate)
    return avg >= 115

def ratio_and_rotation(rect):
    (x, y), (width, height), rect_angle = rect
    if width > height:
        angle = -rect_angle
    else:
        angle = 90 + rect_angle
    if angle > 15:
        return False
    if height == 0 or width == 0:
        return False
    area = height * width
    return ratioCheck(area, width, height)

top = tk.Tk()
top.geometry('900x700')
top.title('Number Plate Recognition')
top.iconphoto(True, PhotoImage(file="/home/shivam/Dataflair/Keras Projects_CIFAR/GUI/logo.png"))
img = ImageTk.PhotoImage(Image.open("logo.png"))
top.configure(background='#CDCDCD')
label = Label(top, background='#CDCDCD', font=('arial', 35, 'bold'))
sign_image = Label(top, bd=10)
plate_image = Label(top, bd=10)

def classify(file_path):
    res_text = [0]
    res_img = [0]
    img = cv2.imread(file_path)
    # Gradient-based plate localisation: blur, grayscale, vertical Sobel,
    # Otsu threshold, then close gaps with a wide rectangular kernel.
    img2 = cv2.GaussianBlur(img, (3, 3), 0)
    img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    img2 = cv2.Sobel(img2, cv2.CV_8U, 1, 0, ksize=3)
    _, img2 = cv2.threshold(img2, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    element = cv2.getStructuringElement(shape=cv2.MORPH_RECT, ksize=(17, 3))
    morph_img_threshold = img2.copy()
    cv2.morphologyEx(src=img2, op=cv2.MORPH_CLOSE, kernel=element,
                     dst=morph_img_threshold)
    num_contours = cv2.findContours(morph_img_threshold,
                                    mode=cv2.RETR_EXTERNAL,
                                    method=cv2.CHAIN_APPROX_NONE)[-2]
    cv2.drawContours(img2, num_contours, -1, (0, 255, 0), 1)
    for cnt in num_contours:
        min_rect = cv2.minAreaRect(cnt)
        if ratio_and_rotation(min_rect):
            x, y, w, h = cv2.boundingRect(cnt)
            plate_img = img[y:y+h, x:x+w]
            print("Number identified number plate...")
            res_img[0] = plate_img
            cv2.imwrite("result.png", plate_img)
            if isMaxWhite(plate_img):
                clean_plate, rect = clean2_plate(plate_img)
                if rect:
                    x1, y1, w1, h1 = rect
                    x, y, w, h = x + x1, y + y1, w1, h1
                    plate_im = Image.fromarray(clean_plate)
                    text = tess.image_to_string(plate_im, lang='eng')
                    res_text[0] = text
                    if text:
                        break
    label.configure(foreground='#011638', text=res_text[0])
    uploaded = Image.open("result.png")
    im = ImageTk.PhotoImage(uploaded)
    plate_image.configure(image=im)
    plate_image.image = im
    plate_image.pack()
    plate_image.place(x=560, y=320)

def show_classify_button(file_path):
    classify_b = Button(top, text="Classify Image",
                        command=lambda: classify(file_path), padx=10, pady=5)
    classify_b.configure(background='#364156', foreground='white',
                         font=('arial', 15, 'bold'))
    classify_b.place(x=490, y=550)

def upload_image():
    try:
        file_path = filedialog.askopenfilename()
        uploaded = Image.open(file_path)
        uploaded.thumbnail((top.winfo_width() / 2.25, top.winfo_height() / 2.25))
        im = ImageTk.PhotoImage(uploaded)
        sign_image.configure(image=im)
        sign_image.image = im
        label.configure(text='')
        show_classify_button(file_path)
    except Exception:
        pass

upload = Button(top, text="Upload an image", command=upload_image,
                padx=10, pady=5)
upload.configure(background='#364156', foreground='white',
                 font=('arial', 15, 'bold'))
upload.pack()
upload.place(x=210, y=550)
sign_image.pack()
sign_image.place(x=70, y=200)
label.pack()
label.place(x=500, y=220)
heading = Label(top, image=img)
heading.configure(background='#CDCDCD', foreground='#364156')
heading.pack()
top.mainloop()
main.py
import numpy as np
import cv2
from PIL import Image
import pytesseract as tess

def ratioCheck(area, width, height):
    # Accept only regions whose area and aspect ratio are plate-like.
    ratio = float(width) / float(height)
    if ratio < 1:
        ratio = 1 / ratio
    if (area < 1063.62 or area > 73862.5) or (ratio < 3 or ratio > 6):
        return False
    return True

def clean2_plate(plate):
    # Threshold the plate crop and keep the largest plate-like contour.
    gray_img = cv2.cvtColor(plate, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(gray_img, 110, 255, cv2.THRESH_BINARY)
    num_contours = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
                                    cv2.CHAIN_APPROX_NONE)[-2]
    if num_contours:
        contour_area = [cv2.contourArea(c) for c in num_contours]
        max_cntr_index = np.argmax(contour_area)
        max_cnt = num_contours[max_cntr_index]
        max_cntArea = contour_area[max_cntr_index]
        x, y, w, h = cv2.boundingRect(max_cnt)
        if not ratioCheck(max_cntArea, w, h):
            return plate, None
        final_img = thresh[y:y+h, x:x+w]
        return final_img, [x, y, w, h]
    else:
        return plate, None

def isMaxWhite(plate):
    avg = np.mean(plate)
    return avg >= 115

def ratio_and_rotation(rect):
    (x, y), (width, height), rect_angle = rect
    if width > height:
        angle = -rect_angle
    else:
        angle = 90 + rect_angle
    if angle > 15:
        return False
    if height == 0 or width == 0:
        return False
    area = height * width
    return ratioCheck(area, width, height)

img = cv2.imread("testData/sample15.jpg")
print("Number input image...")
cv2.imshow("input", img)

# Same localisation pipeline as the GUI: blur, grayscale, vertical Sobel,
# Otsu threshold, morphological closing, then contour filtering.
img2 = cv2.GaussianBlur(img, (3, 3), 0)
img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
img2 = cv2.Sobel(img2, cv2.CV_8U, 1, 0, ksize=3)
_, img2 = cv2.threshold(img2, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
element = cv2.getStructuringElement(shape=cv2.MORPH_RECT, ksize=(17, 3))
morph_img_threshold = img2.copy()
cv2.morphologyEx(src=img2, op=cv2.MORPH_CLOSE, kernel=element,
                 dst=morph_img_threshold)
num_contours = cv2.findContours(morph_img_threshold, mode=cv2.RETR_EXTERNAL,
                                method=cv2.CHAIN_APPROX_NONE)[-2]

for cnt in num_contours:
    min_rect = cv2.minAreaRect(cnt)
    if ratio_and_rotation(min_rect):
        x, y, w, h = cv2.boundingRect(cnt)
        plate_img = img[y:y+h, x:x+w]
        print("Number identified number plate...")
        cv2.imshow("num plate image", plate_img)
        if cv2.waitKey(0) & 0xFF == ord('q'):
            pass
        if isMaxWhite(plate_img):
            clean_plate, rect = clean2_plate(plate_img)
            if rect:
                x1, y1, w1, h1 = rect
                x, y, w, h = x + x1, y + y1, w1, h1
                plate_im = Image.fromarray(clean_plate)
                text = tess.image_to_string(plate_im, lang='eng')
                print("Number Detected Plate Text : ", text)
output:
CHAPTER 6
CONCLUSION :
This paper presents a recognition method in which the vehicle plate image or video is obtained by digital cameras and the image is processed to get the number plate information. A rear image of a vehicle is captured and processed using various algorithms. Good image preprocessing almost guarantees successful recognition. Further, we plan to study the characteristics of the automatic number plate system for better performance. The implementation works quite well; however, there is still room for improvement. The camera used in this project is sensitive to vibration and fast-changing targets due to the long shutter time. The system's robustness and speed can be increased if a high-resolution camera is used. The OCR method used in this project for recognition is sensitive to misalignment and to different sizes; an affine transformation can be used to improve OCR recognition across different sizes and angles. Statistical analysis can also be used to define the probability of detection and recognition of the vehicle number plate.
Future work :
NPR can be further exploited for vehicle owner identification, vehicle model identification, traffic control, vehicle speed control and vehicle location tracking. It avoids searching vehicle owner registration details manually and is cost effective for any country. For low-resolution images, improvement algorithms such as super-resolution can be applied. The present work considers one vehicle number plate at a time, but in real time there can be more than one vehicle number plate while the images are being captured; images containing multiple vehicle number plates should also be considered for NPR, whereas most other systems use offline images of a single vehicle.
References
[3] Y. Lee, J. Lee, H. Ahn and M. Jeon, "SNIDER: Single Noisy Image Denoising and Rectification for Improving License Plate Recognition," Machine Learning and Vision Laboratory, Gwangju Institute of Science and Technology (GIST), Korea.
[6] M. T. Qadri and M. Asif, "Automatic Number Plate Recognition System for Vehicle Identification Using Optical Character Recognition," Department of Electronic Engineering, Sir Syed University of Engineering & Technology, Karachi, Pakistan, 2009.