Irjet V8i5131
Siya Philip1, Shikha S Nambiar2, Shreya J3, T V N Satya Pratyusha4 and Sneha S Bagalkot5
1Student, Dept. of Computer Science Engineering, Presidency University, Bangalore, Karnataka, India
2Student, Dept. of Computer Science Engineering, Presidency University, Bangalore, Karnataka, India
3Student, Dept. of Computer Science Engineering, Presidency University, Bangalore, Karnataka, India
4Student, Dept. of Computer Science Engineering, Presidency University, Bangalore, Karnataka, India
5Assistant Professor, Dept. of Computer Science Engineering, Presidency University, Bangalore, Karnataka, India
---------------------------------------------------------------------***----------------------------------------------------------------------
Abstract – Handwriting is unique to each person, much like a fingerprint; because of this, it is also referred to as the brain's fingerprint. Criminals use handwriting forgery to fraudulently produce, alter, or imitate a person's handwriting so that it appears similar to the genuine handwriting, usually with the intent of profiting at the expense of the innocent party. In the present study, a method is proposed in which a model is trained on a dataset of handwriting, and a given signature is predicted to be genuine or forged based on features such as the ratio, centroid, eccentricity, skewness, kurtosis, and solidity of the words.

Key Words: Handwriting Forgery Detection, Word Segmentation, Image Pre-processing, Feature Extraction, Multi-Layer Perceptron, Neural Network, Prediction.

Specific features of handwriting are:
I. The roundness of the letters
II. The spacing between letters
III. The pressure put on the paper while writing
IV. The average size of the letters
V. The angle of inclination of the letters

The above are some of the characteristics that help determine the authenticity of a person's handwriting. Since they are unique to each person, these characteristics can be used to make decisions.

In this project, we assess the validity of a given input by extracting these variables and comparing them, because this is an essential consideration when judging the authenticity of the text.
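The word-level features named above can be sketched in plain NumPy. This is a minimal illustration, not the authors' implementation: it omits solidity (which requires a convex hull), and the exact definitions of ratio, skewness, and kurtosis used here are assumptions.

```python
import numpy as np

def extract_features(binary):
    """Compute a few of the paper's features from a binary word image
    where ink pixels are 1: ratio of ink pixels, normalised centroid,
    eccentricity, and the skewness/kurtosis of the horizontal ink
    distribution. Solidity (convex-hull based) is omitted for brevity."""
    ys, xs = np.nonzero(binary)
    h, w = binary.shape
    ratio = xs.size / binary.size              # fraction of ink pixels
    cx, cy = xs.mean() / w, ys.mean() / h      # centroid, scaled to [0, 1]
    # eccentricity from the eigenvalues of the ink-pixel covariance matrix
    cov = np.cov(np.stack([xs, ys]).astype(float))
    evals = np.clip(np.linalg.eigvalsh(cov), 0.0, None)   # ascending, >= 0
    ecc = np.sqrt(1.0 - evals[0] / evals[1]) if evals[1] > 0 else 0.0
    # skewness and excess kurtosis of the horizontal ink distribution
    mu, sigma = xs.mean(), xs.std()
    skew = ((xs - mu) ** 3).mean() / sigma ** 3
    kurt = ((xs - mu) ** 4).mean() / sigma ** 4 - 3.0
    return ratio, cx, cy, ecc, skew, kurt
```

In practice these per-word values would be written to the CSV files used later for training and testing.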
© 2021, IRJET | Impact Factor value: 7.529 | ISO 9001:2008 Certified Journal | Page 634
International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056
Volume: 08 Issue: 05 | May 2021 www.irjet.net p-ISSN: 2395-0072
pass filtering) to eliminate false alarms, and the locations of the local maxima (the white space between the lines) are determined. Line segmentation is useful for breaking up connected ascenders and descenders, as well as for deriving an automatic scale-selection mechanism.

To build a scale space, the line images are smoothed and then convolved with second-order anisotropic Gaussian derivative filters, yielding blob-like features that provide the focus-of-attention regions (i.e., the words in the original document image). A connected-component analysis of the blob image is used to extract the words, followed by a reverse mapping of the bounding boxes. After that, each box is extended vertically to make room for the ascenders and descenders.

…as feature extraction will now be able to give more accurate values. Second, we remove any noise from the image using a Gaussian filter and then convert it to binary format, since that makes the images much easier to compare later on.

Fig -4: Genuine Image    Fig -5: Forged Image
Images after Pre-Processing:
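The Gaussian-filter-then-binarise step described above can be sketched as follows. This is a stand-in, not the paper's exact pipeline: the kernel size, sigma, and the mean-intensity threshold are assumptions (the paper does not state which threshold it uses), and plain NumPy replaces an image-processing library.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    # 1-D Gaussian kernel, normalised so the weights sum to 1
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def preprocess(gray):
    """Smooth a greyscale image with a separable Gaussian filter,
    then binarise it: pixels darker than the mean become ink (1)."""
    k = gaussian_kernel()
    # separable convolution: filter every row, then every column
    smoothed = np.apply_along_axis(np.convolve, 1, gray, k, mode="same")
    smoothed = np.apply_along_axis(np.convolve, 0, smoothed, k, mode="same")
    return (smoothed < smoothed.mean()).astype(np.uint8)
```

The resulting binary image is what the feature-extraction stage would consume.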
b) Activation
3.4 MODEL
c) Output Layer

This is the final layer of the network, and it is in charge of producing the output values required by the problem statement.

3.4.3 Training model

a) Data set

The first step is to read the train and test data. We begin by reading the training CSV file and performing operations on it, such as retrieving data from a column and storing it in an array using the values attribute.

Then we use the astype() function, which creates a new copy of the training input with each value converted to a float (in our case). It does not change the training input itself, so you can check the value returned by astype() to get the converted array.

The to_categorical() method can then be used to transform a NumPy array holding data that represents various categories into a NumPy array with binary values. It has the same number of rows as the input array and as many columns as there are classes.

…value. The weight helps connect one layer to the next. The first layer's input value and weight are multiplied and added to the first layer's bias, and the output is used as the input for the next layer.

Fig -12: Weights and Bias

This is how we develop our model, and we build it by passing the input value. To feed data into the TensorFlow graph, we use a placeholder.

d) Loss and Optimizer

The next step is to define the loss and the optimizer. The aim of optimization is to reduce the loss function to the smallest possible value. If the loss is reduced to an appropriate amount, the model will have learned a function that maps the input to the output.
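The flow just described (input value × weight + bias feeding the next layer, and a loss for the optimizer to minimise) can be sketched in NumPy. The layer sizes, the sigmoid activation, and the softmax/cross-entropy pairing are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative sizes: 6 extracted features in, 2 classes out (genuine / forged)
x = rng.normal(size=(1, 6))                      # one feature vector
w1, b1 = rng.normal(size=(6, 8)), np.zeros(8)    # first layer's weights and bias
w2, b2 = rng.normal(size=(8, 2)), np.zeros(2)    # output layer's weights and bias

# layer 1: input value * weight + bias, then a sigmoid activation
h = 1.0 / (1.0 + np.exp(-(x @ w1 + b1)))
# output layer: the hidden activations are the next layer's input
logits = h @ w2 + b2

# softmax turns the logits into class probabilities
p = np.exp(logits - logits.max())
p /= p.sum()

# cross-entropy loss for a "genuine" (class 0) example; during training
# the optimizer's job is to drive this value down
loss = -np.log(p[0, 0])
```

In the TensorFlow version described in the text, x would instead be a placeholder fed at session run time, and an optimizer such as gradient descent would update the weights and biases to minimise the loss.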
…person write the same word, which we will consider the forged image. Each of these 10 users has given 5 images; for example, the user with ID no. 1 has given 5 samples of his real handwriting and 5 forged ones. This is so that, when comparing values at the end, we can get a more reliable result, since we have more values to work with.

Now we pass these to feature extraction, which returns their values in CSV files. As can be seen in Figs 6 and 7, the system stores three of the values from the five genuine images in the Training CSV file and the other two in the Testing CSV file. The situation for the 5 forged images is identical: three of them are sent to the Training file and the other two to the Testing file.

…forgeries of the handwriting of other subjects. These handwritings were scanned digitally and saved in a folder.

As a result, we were able to distinguish between genuine and forged handwritten documents using pre-processing, feature extraction, and training of the model with genuine and forged image datasets.