
Computers and Electronics in Agriculture 187 (2021) 106279

Original papers

Tomato plant disease detection using transfer learning with C-GAN synthetic images

Amreen Abbas, Sweta Jain, Mahesh Gour*, Swetha Vankudothu
Maulana Azad National Institute of Technology, Bhopal, MP 462003, India

ARTICLE INFO

Keywords: Deep learning; Tomato plant disease detection; Conditional Generative Adversarial Network; Data augmentation; Pre-trained DenseNet121 network; Synthetic images

ABSTRACT

Plant diseases and pernicious insects are a considerable threat in the agriculture sector. Therefore, early detection and diagnosis of these diseases are essential. The ongoing development of deep learning methods has greatly helped in the detection of plant diseases, providing a robust tool with highly precise outcomes, but the accuracy of deep learning models depends on the volume and the quality of labeled data available for training. In this paper, we propose a deep learning-based method for tomato disease detection that utilizes a Conditional Generative Adversarial Network (C-GAN) to generate synthetic images of tomato plant leaves. Thereafter, a DenseNet121 model is trained on synthetic and real images using transfer learning to classify the tomato leaf images into ten categories of diseases. The proposed model has been trained and tested extensively on the publicly available PlantVillage dataset. The proposed method achieved an accuracy of 99.51%, 98.65%, and 97.11% for tomato leaf image classification into 5 classes, 7 classes, and 10 classes, respectively. The proposed approach shows its superiority over the existing methodologies.

1. Introduction

Agriculture has been the primary source of income for the majority of people in India. The extensive commercialization of agriculture has greatly affected our environment. One of the biggest concerns in the field of agriculture is the detection of plant diseases. Early detection of diseases helps in preventing the spread of disease among other plants, thus preventing substantial economic losses. The impact of plant illness ranges from mild manifestation to the destruction of complete plantations that severely affect the agricultural economy (Savary et al., 2012).

Deep learning is a concept that combines data analysis with image processing, resulting in very accurate results. Nowadays, deep learning is being extensively used in various fields like object detection (Redmon et al., 2016), signal and voice recognition (Abdel-Hamid et al., 2014), biomedical image classification (Gour et al., 2020; Gour and Jain, 2020) and segmentation (Gour et al., 2019). Deep learning is also widely used in the field of agriculture for plant disease detection and classification. Among deep learning techniques, the Convolutional Neural Network (CNN) is considered the best method. Various CNN architectures like AlexNet (Krizhevsky et al., 2012), GoogLeNet (Al-Qizwini et al., 2017), etc. are being used for the detection and classification of plant diseases.

The performance of CNN models heavily depends on the dataset available for training. Generally, these models show improved results and high generalizability on a sufficiently large dataset. The datasets presently available for tomato plant disease typically do not contain enough images in diverse conditions, which is a necessity for building high-accuracy models. When the dataset is small, the model may overfit and perform badly on real-environment test data. Hence, various data augmentation techniques like affine transformation and perspective transformation are used to enhance the dataset (Gour et al., 2020). When the training images are insufficient and image transformation techniques are unable to change the outcomes, Generative Adversarial Networks (GANs) are notably used for producing synthetic images.

In this research work, we have developed a deep learning based framework to recognize tomato plant diseases by analyzing tomato leaf images. This method would help farmers classify diseases affecting tomato cultivation by simply taking an image of diseased leaves, instead of seeking costly expert analysis. In the proposed method, we generated synthetic images using a Conditional Generative Adversarial Network (C-GAN) (Mirza and Osindero, 2014) and added these images to the dataset for training purposes.

* Corresponding author.
E-mail address: maheshgour0704@gmail.com (M. Gour).

https://doi.org/10.1016/j.compag.2021.106279
Received 22 September 2020; Received in revised form 10 April 2021; Accepted 13 June 2021
Available online 29 June 2021
0168-1699/© 2021 Elsevier B.V. All rights reserved.

Thereafter, a DenseNet model (Huang et al., 2017) was trained on real tomato plant images and synthetic tomato images (generated by the C-GAN model) for the detection of tomato diseases from leaf images.

The paper is organized in the following manner: Section 2 reviews the related work on existing methods; Section 3 presents the methodology, which includes the description of the C-GAN model and of the DenseNet model; Section 4 consists of the summary of the dataset, followed by the experimental setup and results along with the performance comparison with existing methods and pre-trained CNN models. Conclusions are presented in Section 5.

2. Related work

In agriculture, plant diseases cause havoc resulting in economic loss. Various researchers have put forth different techniques to put a check on these infections. Over the recent couple of years, researchers have been using deep learning techniques like convolutional neural networks for disease detection and classification. Akila and Deepan (2018) proposed a deep learning model in which they used the Faster Region-based Convolutional Neural Network (R-CNN) (Ren et al., 2015), the Region-based Fully Convolutional Network (R-FCN) (Dai et al., 2016), and the Single Shot Detector (SSD) (Liu et al., 2016) for plant disease detection. Their dataset consists of images of fruit plants, vegetable crops, cereal crops, and cash crops. They applied several image augmentation techniques like image rotation, brightness adjustment, perspective transformation, and affine transformation.

Adhikari et al. (2018) proposed a model to detect plant disease automatically, especially for the tomato plant, using image processing. The dataset consisted of images of three types of tomato plant diseases: Gray spot, Late Blight, and Bacterial Canker. Some of the images were extracted from the Internet, and some were captured in the farm using camera devices. The designed CNN model consisted of 24 convolutional layers and two fully connected layers. The structure is designed based on the standard YOLO (You Only Look Once) model (Redmon et al., 2016). The overall accuracy of the model was 89% on the PlantVillage dataset and 76% when they used their own dataset.

Fuentes et al. (2017) proposed a system that is capable of distinguishing nine sorts of diseases and pests in tomato plants. It detects the disease class as well as the location of the disease. Images of plants in fields were taken using camera devices. Three detectors, namely R-FCN, Faster R-CNN, and SSD, were combined with feature extractors like the VGG-16 and ResNet-50 networks. When Faster R-CNN was used with VGG-16 (Simonyan and Zisserman, 2014), the model had a mAP (mean Average Precision) of 83%, and when it was used with ResNet-50, the model achieved a mAP of 75.37%. In contrast, when SSD was used with ResNet-50, the model had a mAP of 82.53%, and when R-FCN was used with ResNet-50, the model achieved a mAP of 85.98%.

Ashqar and Abu-Naser (2018) developed a CNN model to identify five diseases of tomato plants, namely bacterial spot, early blight, septoria leaf spot, leaf mold and yellow leaf curl virus. Their model consisted of four convolutional layers. The ReLU (Rectified Linear Unit) activation function was used, and each convolutional layer was followed by a max-pooling layer. They used 9,000 images from the PlantVillage dataset. Their model achieved an accuracy of 99.84% when it was trained on full-color images and an accuracy of 95.54% when it was trained on gray-scale images. Zhang et al. (2018) proposed a CNN model to distinguish 8 types of tomato leaf illness by utilizing transfer learning. The pre-trained models used in the study were AlexNet, GoogLeNet, and ResNet. The dataset was taken from an open access repository (PlantVillage) containing plant images. The model achieved the highest accuracy of 97.28% when the ResNet network was used with SGD (Stochastic Gradient Descent).

Similarly, Durmuş et al. (2017) used AlexNet and SqueezeNet to classify tomato plant images into 10 classes (9 diseases and 1 healthy). On the PlantVillage dataset, the AlexNet model achieved an accuracy of 95.65%, while the SqueezeNet model achieved an accuracy of 94.3%. Karthik et al. (2020) used two deep learning architectures on the PlantVillage dataset to detect 3 diseases in tomato plants, namely early blight, late blight, and leaf mold. In the first architecture, a feed-forward CNN is used along with residual learning, and in the second architecture, they used a CNN along with the attention mechanism and residual learning. The attention-based residual CNN architecture presented the highest accuracy of 98%. Agarwal et al. (2020) developed a CNN model, consisting of 3 convolutional, 3 max-pooling, and 2 fully connected layers, to detect 10 classes (9 diseases and 1 healthy) in tomato plants; it achieved an overall accuracy of 91.2% on the PlantVillage dataset. Elhassouny and Smarandache (2019) developed a smart mobile application to classify 9 diseases of the tomato plant using MobileNet (Howard et al., 2017). The application used 7,176 tomato leaf images from the PlantVillage dataset for the disease classification. They achieved an accuracy of 90.3%. Widiyanto et al. (2019) developed a CNN model to identify 4 diseases in tomato plants, namely Late blight, Septoria leaf spot, Mosaic virus and Yellow leaf curl virus, along with healthy leaves. The authors trained the model on 1,000 images per class obtained from the PlantVillage dataset. The overall classification accuracy achieved for 5 classes was 96.6%.

Table 1 summarizes the different deep learning approaches that have been used in the literature for the detection and classification of tomato plant diseases.

Table 1
Summary of the literature on plant disease detection.
Paper | Methods | Dataset | Plants/classes | Accuracy
Adhikari et al. (2018) | YOLO | Internet images and PlantVillage | 3 classes of diseases in tomato plants | 76%
Fuentes et al. (2017) | SSD, R-FCN, Faster R-CNN with VGG and Residual Network | 5,000 field images | 9 classes of diseases and pests in tomato | 85%
Ashqar and Abu-Naser (2018) | ANN | Tomato PlantVillage (9,000 images) | 5 classes of tomato leaf diseases | 99.84%
Zhang et al. (2018) | AlexNet, GoogLeNet and ResNet | Tomato PlantVillage (5,550 images) | 8 classes of tomato leaf diseases | 97.28%
Karthik et al. (2020) | Residual learning and attention mechanism | Tomato PlantVillage (95,999 images) | 3 classes of tomato plant diseases | 98%
Agarwal et al. (2020) | CNN network | Tomato PlantVillage (17,500 images) | 10 classes of tomato plant diseases | 91.20%
Durmuş et al. (2017) | AlexNet and SqueezeNet | Tomato PlantVillage | 10 classes of tomato plant diseases | 95.65%
Elhassouny and Smarandache (2019) | MobileNet | Tomato PlantVillage | 10 classes of tomato plant diseases | 90.3%
Widiyanto et al. (2019) | CNN | Tomato PlantVillage | 5 classes of tomato plant diseases | 96.6%

3. Methodology

In this section, we discuss the proposed method. The block diagram of the proposed method is shown in Fig. 1. The proposed approach can be divided into two parts: in the first part, synthetic images have been generated using C-GAN for data augmentation; in the second part, a pre-trained DenseNet121 model has been fine-tuned on tomato leaf images for disease classification. The detailed description is given in the following subsections.

Fig. 1. Block diagram of the proposed method.

3.1. C-GAN model as synthetic image generator

In order to prevent the network from over-fitting, a Conditional Generative Adversarial Network (C-GAN) (Mirza and Osindero, 2014) can be used as a data augmentation technique to enlarge the dataset. In a GAN, convolutional layers are applied to form an image matrix from random noise. A GAN consists of a discriminator model and a generator model. The work of the generator is to produce fake images, and the work of the discriminator is to distinguish between fake and real images. The generator and discriminator train simultaneously and try to outdo each other. The discriminator makes sure that the images generated by the generator are as close to the real images as possible.

The C-GAN model (Mirza and Osindero, 2014) consists of two adversarial networks: the generator model and the discriminator model. The C-GAN discriminator model consists of an input layer, an embedding layer, and a dense layer, followed by an input layer, a reshape layer, and a concatenate layer, which is in turn followed by four convolutional layers. Each convolutional layer is followed by a Leaky ReLU layer. The last Leaky ReLU layer is followed by a flatten layer, a dropout layer, and a dense layer. The model consists of a total of 771,454 trainable parameters. The network layers and parameters of the discriminator model are represented in Table 2.

Table 2
Summary of the C-GAN discriminator model.
Layer Name | Type | Parameter/Filter size
Input | Input | [128,128,3]
Embedding | Embedding | [0,3,25]
Dense | Dense | sigmoid activation
Reshape | Reshape | [64,64,3]
Concatenate | Concatenate | [64,64,6]
Leaky ReLU | Leaky ReLU | max(0,x)
Conv2D | Conv2D | [32 32], [16 16], [8 8], [4 4], stride=[2 2], padding same
Flatten | Flatten | [0,2048]
Dropout | Dropout | [0,2048]

The C-GAN generator model consists of input layers and a dense layer, followed by an embedding layer and a Leaky ReLU layer, which is in turn followed by dense, reshape and concatenate layers. The concatenate layer is followed by four convolutional layers, where each convolutional layer is followed by a Leaky ReLU layer. The last Leaky ReLU layer is followed by a final convolutional layer. The model consists of a total of 1,735,904 trainable parameters. The network layers and parameters of the generator model are represented in Table 3.

Table 3
Summary of the C-GAN generator model.
Layer Name | Type | Parameter/Filter size
Input | Input | [4,4,3]
Dense | Dense | sigmoid activation
Embedding | Embedding | [0,3,25]
Leaky ReLU | Leaky ReLU | max(0,x)
Reshape | Reshape | [4,4,256]
Concatenate | Concatenate | [4,4,259]
Conv2D | Conv2D | [32 32], [16 16], [8 8], [4 4], stride=[2 2], padding same

The C-GAN generator model (G) takes latent points and random noise as input and generates synthetic images, while the work of the discriminator model (D) is to differentiate between real images and the synthetic images generated by the G model. Suppose I represents the data and L represents the additional class label information, where I and L are fed as input to the discriminative function, and the input noise distribution of the G model is given by q_z(z). The D model tries to maximize the probability of assigning the labels correctly to the original images as well as to the synthetic images generated by the generator model, represented by log D(I|L), while the G model tries to minimize the generator loss, represented by log(1 - D(G(z|L))). The objective function of C-GAN is similar to a two-player minimax game and is represented as follows:

\min_G \max_D V(D, G) = \mathbb{E}_{I \sim p_{data}(I)}[\log D(I|L)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z|L)))]    (1)

In the proposed approach, we have used the C-GAN for generating synthetic tomato leaf images of various diseases. For generating synthetic images from the C-GAN model, we first trained it on the tomato images. Let the given tomato disease leaf dataset be DS = \{(I^{(n)}, L^{(n)})\}_{n=1}^{N}, where I^{(n)} represents a given image and L^{(n)} \in \{0, 1, \ldots, 9\} represents its corresponding label. To train the C-GAN, a real tomato image I^{(n)} with its corresponding label L^{(n)} is given as input to the discriminator model; simultaneously, a noise vector and a label L^{(n)} are given to the generator model. The generator model then generates a fake tomato image I_f, and this generated fake image I_f is also given to the discriminator model. Thereafter, the discriminator tries to distinguish between the fake and real images. In this way, the C-GAN has been trained on the tomato leaf images. After successful completion of the training of the C-GAN, we obtained synthetic images of tomato leaves corresponding to each disease category.
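The paper does not publish implementation code or name its framework, so the following is only a minimal Keras/TensorFlow sketch of a conditional generator and discriminator in the spirit of Tables 2 and 3. The latent size, embedding width, filter counts and kernel sizes are illustrative assumptions, not the authors' exact settings.

```python
# Hypothetical C-GAN components in Keras (illustrative; not the authors' code).
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 10   # nine diseases + healthy
LATENT_DIM = 100   # assumed latent size; the paper does not state it

def build_discriminator(img_shape=(128, 128, 3)):
    img = layers.Input(img_shape)
    label = layers.Input((1,), dtype="int32")
    # Embed the class label and broadcast it to an image-sized condition channel.
    e = layers.Embedding(NUM_CLASSES, 50)(label)
    e = layers.Dense(img_shape[0] * img_shape[1])(e)
    e = layers.Reshape((img_shape[0], img_shape[1], 1))(e)
    x = layers.Concatenate()([img, e])
    for filters in (32, 64, 128, 256):          # four conv blocks, as in Table 2
        x = layers.Conv2D(filters, 4, strides=2, padding="same")(x)
        x = layers.LeakyReLU(0.2)(x)
    x = layers.Dropout(0.4)(layers.Flatten()(x))
    real_or_fake = layers.Dense(1, activation="sigmoid")(x)
    return tf.keras.Model([img, label], real_or_fake)

def build_generator():
    z = layers.Input((LATENT_DIM,))
    label = layers.Input((1,), dtype="int32")
    e = layers.Embedding(NUM_CLASSES, 50)(label)
    e = layers.Reshape((4, 4, 1))(layers.Dense(4 * 4)(e))
    x = layers.Dense(4 * 4 * 256)(z)
    x = layers.LeakyReLU(0.2)(layers.Reshape((4, 4, 256))(x))
    x = layers.Concatenate()([x, e])            # label-conditioned latent tensor
    for filters in (256, 128, 64, 32, 16):      # 4x4 -> 128x128 in five steps
        x = layers.Conv2DTranspose(filters, 4, strides=2, padding="same")(x)
        x = layers.LeakyReLU(0.2)(x)
    img = layers.Conv2D(3, 7, padding="same", activation="tanh")(x)
    return tf.keras.Model([z, label], img)
```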

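A single adversarial update for the minimax objective of Eq. (1) could then look as follows — again a hedged sketch using the standard non-saturating GAN losses; the optimizer settings are common defaults, not values reported in the paper:

```python
# One adversarial update for G and D (sketch; builds on the models above).
bce = tf.keras.losses.BinaryCrossentropy()
g_opt = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)
d_opt = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)

@tf.function
def train_step(real_imgs, labels, G, D):
    z = tf.random.normal((tf.shape(real_imgs)[0], LATENT_DIM))
    with tf.GradientTape() as d_tape, tf.GradientTape() as g_tape:
        fake_imgs = G([z, labels], training=True)
        d_real = D([real_imgs, labels], training=True)   # D(I|L)
        d_fake = D([fake_imgs, labels], training=True)   # D(G(z|L)|L)
        # D maximizes log D(I|L) + log(1 - D(G(z|L))), per Eq. (1)
        d_loss = bce(tf.ones_like(d_real), d_real) + bce(tf.zeros_like(d_fake), d_fake)
        # G uses the common non-saturating surrogate of minimizing log(1 - D(G(z|L)))
        g_loss = bce(tf.ones_like(d_fake), d_fake)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, D.trainable_variables),
                              D.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, G.trainable_variables),
                              G.trainable_variables))
    return d_loss, g_loss
```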

3.2. Tomato plant disease detection with the DenseNet model

The pre-trained Dense Convolutional Network (DenseNet) has been trained on the CIFAR-10, CIFAR-100, SVHN, and ImageNet datasets (Huang et al., 2017) and has achieved promising results on object recognition tasks. In this study, we used the DenseNet121 model with transfer learning for tomato plant disease detection from tomato leaf images. The detailed description of the DenseNet architecture is given as follows:

DenseNet Architecture: DenseNet is a deep architecture in which each layer is connected to every other layer in a feed-forward manner. DenseNet differs from ResNet (He et al., 2016) in the way features are combined at a layer before they are passed on to the next layers. In ResNet, the features are combined through summation, while in DenseNet the features are combined by concatenating them. Other convolutional networks have l connections for l layers, whereas DenseNet has l(l + 1)/2 connections for l layers. In DenseNet, the inputs of each layer are the feature-maps of the previous layers, and subsequently the feature-maps of a layer are used as inputs in the succeeding layers (Huang et al., 2017). A sketch of this connectivity pattern is given after Table 4.

The architecture of the DenseNet121 model is represented in Fig. 2, and the description of the different layers of the model is given in Table 4. It consists of 4 dense blocks and takes a 224 × 224 pixels image as input. The first convolution layer consists of 2000 convolutions of size 7 × 7 with stride 2, which is followed by a 3 × 3 max pooling layer with stride 2. The pooling layer is followed by three dense blocks, with each dense block being followed by a transition layer. The fourth dense block is followed by a classification layer. The dense blocks consist of batch normalization, ReLU and convolutional layers. The transition layer consists of a 1 × 1 convolution layer followed by a 2 × 2 average pooling layer with stride 2.

Fig. 2. DenseNet Architecture.

Table 4
Summary of the DenseNet121 model.
Layer Name | Output size | Parameter/Filter size
Convolution | 112 × 112 | 7 × 7 conv, stride 2
Pooling | 56 × 56 | 3 × 3 max pool, stride 2
Dense block (1) | 56 × 56 | [1 × 1 conv, 3 × 3 conv] × 6
Transition layer (1) | 56 × 56 | 1 × 1 conv
 | 28 × 28 | 2 × 2 average pool, stride 2
Dense block (2) | 28 × 28 | [1 × 1 conv, 3 × 3 conv] × 12
Transition layer (2) | 28 × 28 | 1 × 1 conv
 | 14 × 14 | 2 × 2 average pool, stride 2
Dense block (3) | 14 × 14 | [1 × 1 conv, 3 × 3 conv] × 24
Transition layer (3) | 14 × 14 | 1 × 1 conv
 | 7 × 7 | 2 × 2 average pool, stride 2
Dense block (4) | 7 × 7 | [1 × 1 conv, 3 × 3 conv] × 16
Additional convolution layer | 7 × 7 | [3 × 3 conv, 3 × 3 conv]
Classification layer | 1 × 1 | 7 × 7 global average pool, 10D fully connected, softmax
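To make the dense connectivity concrete, the following is a hedged Keras sketch of a single dense block: a 1 × 1 bottleneck followed by a 3 × 3 convolution, with features merged by concatenation rather than ResNet-style summation. The growth rate of 32 is DenseNet121's published value; everything else is illustrative.

```python
import tensorflow as tf
from tensorflow.keras import layers

def dense_block(x, num_layers, growth_rate=32):
    # Each iteration appends growth_rate new feature-maps to the running
    # concatenation of all previous outputs (BN-ReLU-1x1 then 3x3 conv).
    for _ in range(num_layers):
        y = layers.BatchNormalization()(x)
        y = layers.ReLU()(y)
        y = layers.Conv2D(4 * growth_rate, 1, padding="same")(y)  # bottleneck
        y = layers.Conv2D(growth_rate, 3, padding="same")(y)
        x = layers.Concatenate()([x, y])  # concatenation, not summation
    return x

# Dense block (1) of Table 4: 6 layers; 64 input channels grow to 64 + 6*32 = 256.
inp = layers.Input((56, 56, 64))
block = tf.keras.Model(inp, dense_block(inp, num_layers=6))
```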
DenseNet Fine-tuning: In order to fine-tune the pre-trained DenseNet model on the tomato leaf images, the top fully-connected layer and Softmax layer have been removed from the network, and two new convolutional layers with the ReLU activation function, an average pooling layer, a fully-connected layer, and a Softmax layer have been added to the network. The weights of the first 410 layers have been initialized with the pre-trained ImageNet weights, and the remaining layers' weights have been learned during the training. The model has been trained for 100 epochs with a learning rate of 0.0001 and a batch size of 32, with the weights being updated using the Adam optimizer (Kingma and Ba, 2014). We have experimentally found these to be the most suitable hyperparameter values for our application. For the training of the DenseNet model, the generated synthetic images of tomato leaves and the images from the tomato PlantVillage dataset (Hughes et al., 2015) have been used in combination.
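A minimal sketch of this fine-tuning recipe, assuming Keras (the paper does not name its framework). Freezing the first 410 layers is one plausible reading of keeping them at their pre-trained values, and the filter counts of the two new convolutional layers are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers

base = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
for layer in base.layers[:410]:      # keep first 410 layers at ImageNet weights
    layer.trainable = False

x = layers.Conv2D(256, 3, padding="same", activation="relu")(base.output)  # new conv 1
x = layers.Conv2D(128, 3, padding="same", activation="relu")(x)            # new conv 2
x = layers.AveragePooling2D(2)(x)
x = layers.Flatten()(x)
outputs = layers.Dense(10, activation="softmax")(x)   # ten tomato classes
model = tf.keras.Model(base.input, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=100)
# train_ds / val_ds are hypothetical tf.data pipelines yielding batches of 32
# (image, one-hot label) pairs built from the real + synthetic images.
```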


Fig. 3. Tomato original leaf images and synthetic leaf images generated by the C-GAN model: row 1 represents the original tomato leaf images of five classes (Yellow Leaf Curl Virus, Bacterial Spot, Late Blight, Septoria leaf spot and Two Spotted Spider Mite); rows 2 and 3 represent synthetic images of the corresponding classes; row 4 represents the original tomato leaf images of the next five classes (Target Spot, Early Blight, Leaf Mold, Mosaic Virus and Healthy); rows 5 and 6 represent the synthetic images of the respective classes.

4. Implementation and results

This section presents the implementation details of the proposed method and the results analysis. Section 4.1 contains the description of the dataset, followed by the experiment setup in Section 4.2. The results of the proposed method and the performance comparison with existing methodologies are presented in Section 4.3.

4.1. Dataset

In order to evaluate the performance of the proposed approach, the publicly available tomato PlantVillage dataset (Hughes et al., 2015) has been used in the experiments. The PlantVillage dataset consists of 16,012 images of tomato plant leaves in ten classes. Out of the ten classes, nine categories are tomato plant leaf diseases, and one category is healthy leaves. The classes of the PlantVillage dataset and the class-wise image distribution are represented in Table 5, together with the class-name abbreviations used in the rest of the paper. All the images of the dataset have been resized to 224 × 224 for faster computation. We have divided the dataset into a training set, a validation set and a test set in the ratio of 60:10:30, with no overlap between the three sets. The training set and the validation set have been used during network training, and the test set has been used for the performance assessment of the model.

Table 5
Class-wise image distribution of the tomato PlantVillage dataset.
Category | Number of images
Tomato Yellow Leaf Curl Virus (YLCV) | 3209
Tomato Bacterial Spot (Bctsp) | 2127
Tomato Late Blight (TLB) | 1909
Tomato Septoria leaf spot (SptL) | 1771
Tomato Two Spotted Spider Mite (SpdM) | 1676
Tomato Target Spot (TTS) | 1404
Tomato Early Blight (TEB) | 1000
Tomato Leaf Mold (LMld) | 952
Tomato Mosaic Virus (MscV) | 373
Tomato healthy (Hlth) | 1591
Total | 16012
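One way to realize the 60:10:30 split with no overlap is sketched below (the paper does not publish its split procedure; stratification by class is an assumption that keeps each class's proportions across the three sets):

```python
# Hedged sketch of a stratified 60/10/30 split over image paths and labels.
from sklearn.model_selection import train_test_split

def split_dataset(paths, labels, seed=42):
    # First carve off 40% of the data, then split that 40% into 10% + 30%.
    train_p, rest_p, train_l, rest_l = train_test_split(
        paths, labels, test_size=0.40, stratify=labels, random_state=seed)
    val_p, test_p, val_l, test_l = train_test_split(
        rest_p, rest_l, test_size=0.75, stratify=rest_l, random_state=seed)
    return (train_p, train_l), (val_p, val_l), (test_p, test_l)  # 60 / 10 / 30
```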
4.2. Experiment setup and evaluation metrics

In this study, two sets of experiments have been performed. In the first set of experiments, the C-GAN model has been trained on the training set for 300 epochs to generate synthetic leaf images of tomato for each of the ten categories. The weights of the generator and discriminator models were updated after each epoch to generate synthetic images as close to the real images as possible. At the end of the training of the network, we generated 4,000 synthetic images of tomato leaves from the C-GAN model. In the second set of experiments, the pre-trained DenseNet model has been trained on the original training set as well as on the combination of the training set (from PlantVillage) and the generated synthetic tomato leaf images.
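Sampling the class-conditional synthetic leaves from the trained generator could look like the following sketch (function and variable names follow the earlier sketches; 400 images per class is an assumed allocation that yields the 4,000 reported synthetic images):

```python
# Draw label-conditioned samples from the trained generator G (sketch).
import numpy as np
import tensorflow as tf

def sample_synthetic(G, latent_dim=100, images_per_class=400, num_classes=10):
    batches = []
    for c in range(num_classes):
        z = tf.random.normal((images_per_class, latent_dim))
        labels = tf.fill((images_per_class, 1), c)
        imgs = G([z, labels], training=False)        # tanh output in [-1, 1]
        batches.append((imgs.numpy() + 1.0) / 2.0)   # rescale to [0, 1]
    return np.concatenate(batches)                   # 4,000 images for 10 classes
```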
To evaluate the performance of the proposed approach, we have used various performance metrics like accuracy, precision, recall, F1-score, and the confusion matrix:

Accuracy = \frac{TP + TN}{TP + TN + FP + FN}    (2)

Precision = \frac{TP}{TP + FP}    (3)

Recall = \frac{TP}{TP + FN}    (4)

F1\text{-}score = \frac{2 \cdot TP}{2 \cdot TP + FP + FN}    (5)

where TP = True Positive, TN = True Negative, FP = False Positive and FN = False Negative.
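The four metrics of Eqs. (2)-(5) can be computed per class and macro-averaged; a hedged scikit-learn sketch (the paper does not state its tooling):

```python
# Sketch: classification metrics from true and predicted class indices.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

def evaluate(y_true, y_pred):
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, average="macro"),
        "recall": recall_score(y_true, y_pred, average="macro"),
        "f1": f1_score(y_true, y_pred, average="macro"),
        "confusion": confusion_matrix(y_true, y_pred),
    }
```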
In order to evaluate the quality of the synthetic images generated by the C-GAN, we have used image quality measures like the Peak Signal-to-Noise Ratio (PSNR) and the Inception Score (IS):

PSNR = 20 \cdot \log_{10}(MAX_I) - 10 \cdot \log_{10}(MSE)    (6)

MSE = \frac{1}{mn} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} [I(i,j) - K(i,j)]^2    (7)

where MAX_I is the maximum possible pixel value of the image I and MSE is the mean squared error.

The Inception Score uses the pre-trained Inception-v3 (Szegedy et al., 2016) model to classify the generated images. The model is used to classify a large number of generated images by predicting the probability of each image belonging to each class. These predictions are summed up into the inception score. The Kullback–Leibler (KL) divergence is calculated for each image as the conditional probability multiplied by the log of the conditional probability minus the log of the marginal probability. The KL divergence is then summed over all images and averaged over all classes, and the exponent of the result is calculated to give the final inception score:

IS = \exp(\mathbb{E}_{y \sim p_g} D_{KL}(p(z|y) \| p(z)))    (8)

where y \sim p_g denotes that y is an image sampled from the generator distribution p_g, p(z|y) is the conditional class distribution, p(z) is the marginal class distribution and D_{KL}(p(z|y) \| p(z)) is the KL divergence between the two distributions.
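Both quality measures follow directly from Eqs. (6)-(8). A small NumPy sketch, under the assumption that the class probabilities p(z|y) come from a pre-trained Inception-v3 classifier applied to the generated images:

```python
import numpy as np

def psnr(img_a, img_b, max_i=255.0):
    # Eq. (6)-(7): PSNR between two equally sized images.
    mse = np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2)
    return 20 * np.log10(max_i) - 10 * np.log10(mse)

def inception_score(probs, eps=1e-12):
    # Eq. (8). probs: (N, num_classes) class probabilities p(z|y) for N images.
    marginal = probs.mean(axis=0, keepdims=True)           # p(z)
    kl = probs * (np.log(probs + eps) - np.log(marginal + eps))
    return float(np.exp(kl.sum(axis=1).mean()))            # exp(E[D_KL])
```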
4.3. Results and discussion

This section presents the performance evaluation of the C-GAN model and of the classification model.

4.3.1. Performance of C-GAN
The qualitative performance of the C-GAN can be visually examined in Fig. 3, which shows the original tomato leaf images from the dataset as well as the synthetic tomato leaf images generated by the C-GAN model. It can be seen in Fig. 3 that the generated synthetic images look closely similar to the real images. The PSNR values between original images, and between original and synthetic images, are represented in Table 6. The comparable values of PSNR show that the quality of the images generated by the C-GAN is very close to the quality of the real images. The mean and standard deviation of the inception scores have been calculated on the original images as well as on the C-GAN synthetic images, and are represented in Table 7. The inception score of the synthetic images is also very similar to the score of the original images.

Table 6
PSNR values between original images and between original and C-GAN images for 10 classes.
Class | PSNR b/w original images (dB) | PSNR b/w original and C-GAN images (dB)
Yellow leaf curl virus | 28.047 | 28.117
Septoria leaf spot | 27.894 | 28.320
Two spotted spider mite | 28.013 | 28.505
Bacterial spot | 27.987 | 28.304
Leaf mold | 27.901 | 27.924
Mosaic virus | 27.894 | 27.920
Target spot | 27.919 | 28.026
Early blight | 28.294 | 27.895
Late blight | 27.861 | 27.749
Healthy | 27.897 | 28.741
Mean | 27.970 | 28.150

Table 7
Inception score of original images and C-GAN images for 10 classes.
Image type | Mean Inception score
Original images | 2.558 ± 0.189
C-GAN images | 2.835 ± 0.228

4.3.2. Performance of the classification model
We have evaluated the performance of the proposed model for the 5-class classification (YLCV, SptL, MscV, TLB and Hlth classes), 7-class classification (YLCV, SpdM, LMld, MscV, TTS, TLB and Hlth classes) and 10-class classification (YLCV, SptL, SpdM, Bctsp, LMld, MscV, TTS, TEB, TLB and Hlth classes) tasks. Table 8 represents the classification performance of the proposed method on the PlantVillage dataset and on the augmented dataset (PlantVillage + synthetic images). The proposed method achieved a classification accuracy of 99.51%, 98.65%, and 97.11% for the 5-class, 7-class, and 10-class classification tasks, respectively. It can be observed from Table 8 that the DenseNet121 model with synthetic images shows a performance improvement in accuracy, precision, recall, and F1-score across all classification tasks, as compared to the original dataset. This improvement in classification performance clearly indicates that data augmentation using the C-GAN model has prevented the network from over-fitting and helped the network to be more generalized. The class-wise performance of the proposed approach for the 10-class classification task on the augmented dataset with synthetic images is represented in Table 9.

Table 8
Accuracy, Precision, Recall, and F1-score for the model with and without augmentation.
Model | Accuracy | Precision | Recall | F1-score
5 classes
DenseNet121 | 98.16% | 0.83 | 0.97 | 0.98
DenseNet121 + Synthetic images | 99.51% | 0.99 | 0.99 | 0.99
7 classes
DenseNet121 | 95.08% | 0.94 | 0.94 | 0.94
DenseNet121 + Synthetic images | 98.65% | 0.98 | 0.99 | 0.98
10 classes
DenseNet121 | 94.34% | 0.95 | 0.92 | 0.93
DenseNet121 + Synthetic images | 97.11% | 0.97 | 0.97 | 0.97

Table 9
Precision, Recall and F1-score for the different disease classes of the tomato plant.
Class | Precision | Recall | F1-score
Yellow leaf curl virus | 0.97 | 0.97 | 0.97
Septoria leaf spot | 0.97 | 0.98 | 0.97
Two spotted spider mite | 1.00 | 0.97 | 0.98
Bacterial spot | 1.00 | 0.97 | 0.98
Leaf mold | 0.96 | 0.97 | 0.96
Mosaic virus | 0.89 | 0.98 | 0.93
Target spot | 1.00 | 0.99 | 0.99
Early blight | 0.94 | 0.99 | 0.97
Late blight | 0.99 | 1.00 | 0.99
Healthy | 0.95 | 0.87 | 0.91
Mean | 0.97 | 0.97 | 0.97

The confusion matrices and Receiver Operating Characteristic (ROC) curves of the proposed method for the 5-class, 7-class and 10-class classification tasks are shown in Fig. 4 and Fig. 5, respectively. The proposed method produces very few false positives and false negatives, as can be observed from the confusion matrices.


Fig. 4. Confusion matrix of tomato plant disease classification using proposed method.

Fig. 5. Receiver Operating Characteristic (ROC) curves of tomato plant disease classification using proposed method.


4.3.3. Performance comparison

The performance comparison of the proposed DenseNet121 model with pre-trained networks, namely VGG19 (Simonyan and Zisserman, 2014), ResNet50, Inception-V3 (Szegedy et al., 2016), Xception (Chollet, 2017), and MobileNet (Howard et al., 2017), for 10-class classification is shown in Table 10. All the models were trained first on the original PlantVillage dataset and then on the augmented dataset consisting of synthetic images generated by the C-GAN model. The DenseNet121 model gave the best results among all the pre-trained models, with an accuracy of 97.11% on the augmented dataset. The results of all the models clearly show that the augmented dataset consisting of synthetic images generated by the C-GAN model gives better accuracy as compared to the original dataset.

Table 10
Comparison of the performance (accuracy, %) of different pre-trained networks.
Method(s) | PlantVillage | PlantVillage + Synthetic images
VGG19 | 89.60 | 90.90
ResNet50 | 76.90 | 80.00
Inception-V3 | 82.10 | 84.90
Xception | 92.20 | 95.80
MobileNet | 91.90 | 94.60
DenseNet169 | 93.03 | 95.15
DenseNet201 | 93.71 | 95.86
DenseNet121 | 94.34 | 97.11

In the past, several studies have been proposed for tomato plant disease detection from tomato leaf images. We have compared our proposed method with some of the studies that were evaluated on the tomato PlantVillage dataset. The performance comparison with existing methods is represented in Table 11. The studies presented in the table were mainly developed for three kinds of classification tasks: 5-class, 7-class and 10-class classification. For the 5-class classification task, the performance of our approach is better than that reported in Widiyanto et al. (2019). Likewise, for the 7-class classification task, the performance of our approach is comparable to that achieved in Rangarajan et al. (2018). Similarly, for the 10-class classification task, the proposed method has shown its superiority over the methods in Agarwal et al. (2020), Durmuş et al. (2017) and Elhassouny and Smarandache (2019).

Table 11
Comparison of results with existing models.
Classification task | Author(s) | Method(s) | Accuracy
5 Classes | Widiyanto et al. (2019) | CNN model | 96.60%
 | Proposed | DenseNet, C-GAN | 99.51%
7 Classes | Rangarajan et al. (2018) | AlexNet, VGG16 | 97.49%
 | Proposed | DenseNet, C-GAN | 98.65%
10 Classes | Agarwal et al. (2020) | CNN network | 91.20%
 | Durmuş et al. (2017) | AlexNet and SqueezeNet | 95.65%
 | Elhassouny and Smarandache (2019) | MobileNet | 90.30%
 | Proposed | DenseNet, C-GAN | 97.11%

5. Conclusion

Deep learning-based approaches have shown promising results in plant disease detection. In this study, we have proposed a deep learning-based method for tomato plant disease detection from tomato leaf images. In the proposed approach, synthetic images have been generated using a Conditional Generative Adversarial Network for data augmentation purposes. Thereafter, a pre-trained DenseNet121 model is fine-tuned on the original tomato leaf images and the synthetic images. The proposed data augmentation technique improves network generalizability and prevents the over-fitting problem. The proposed model achieves an accuracy of 98.16%, 95.08%, and 94.34% on the original PlantVillage dataset for the 5-class, 7-class, and 10-class classification tasks, respectively, and it achieves an accuracy of 99.51%, 98.65%, and 97.11% with the original PlantVillage + synthetic images dataset for the 5-class, 7-class, and 10-class classification tasks, respectively. In addition, our experimental results show the proposed method's superiority over the existing methods.

In future work, we want to extend this method to disease identification and classification in various parts of the plant, like fruits, stems, and branches. We also want to identify the different phases of plant disease.

CRediT authorship contribution statement

Amreen Abbas: Methodology, Software, Writing - original draft. Sweta Jain: Supervision, Writing - review & editing, Investigation. Mahesh Gour: Conceptualization, Methodology, Software, Writing - review & editing, Visualization. Swetha Vankudothu: Software, Writing - original draft.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References


Abdel-Hamid, O., Mohamed, A.-R., Jiang, H., Deng, L., Penn, G., Yu, D., 2014. Convolutional neural networks for speech recognition. IEEE/ACM Trans. Audio, Speech, Language Process. 22 (10), 1533–1545.
Adhikari, S., Saban Kumar, K., Balkumari, L., Shrestha, B., Baiju, B., 2018. Tomato plant diseases detection system using image processing. In: 1st KEC Conference on Engineering and Technology, Lalitpur, vol. 1, pp. 81–86.
Agarwal, M., Singh, A., Arjaria, S., Sinha, A., Gupta, S., 2020. ToLeD: Tomato leaf disease detection using convolution neural network. Procedia Comput. Sci. 167, 293–301.
Akila, M., Deepan, P., 2018. Detection and classification of plant leaf diseases by using deep learning algorithm. Int. J. Eng. Res. Technol. 6, 2–7.
Al-Qizwini, M., Barjasteh, I., Al-Qassab, H., Radha, H., 2017. Deep learning algorithm for autonomous driving using GoogLeNet. In: 2017 IEEE Intelligent Vehicles Symposium (IV). IEEE, pp. 89–96.
Ashqar, B.A., Abu-Naser, S.S., 2018. Image-based tomato leaves diseases detection using deep learning.
Chollet, F., 2017. Xception: Deep learning with depthwise separable convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1251–1258.
Dai, J., Li, Y., He, K., Sun, J., 2016. R-FCN: Object detection via region-based fully convolutional networks. In: Advances in Neural Information Processing Systems, pp. 379–387.
Durmuş, H., Güneş, E.O., Kırcı, M., 2017. Disease detection on the leaves of the tomato plants by using deep learning. In: 2017 6th International Conference on Agro-Geoinformatics. IEEE, pp. 1–5.
Elhassouny, A., Smarandache, F., 2019. Smart mobile application to recognize tomato leaf diseases using convolutional neural networks. In: 2019 International Conference of Computer Science and Renewable Energies (ICCSRE). IEEE, pp. 1–4.
Fuentes, A., Yoon, S., Kim, S.C., Park, D.S., 2017. A robust deep-learning-based detector for real-time tomato plant diseases and pests recognition. Sensors 17 (9), 2022.
Gour, M., Jain, S., 2020. Stacked convolutional neural network for diagnosis of COVID-19 disease from X-ray images. arXiv preprint arXiv:2006.13817.
Gour, M., Jain, S., Agrawal, R., 2019. DeepRNNetSeg: Deep residual neural network for nuclei segmentation on breast cancer histopathological images. In: International Conference on Computer Vision and Image Processing. Springer, pp. 243–253.
Gour, M., Jain, S., Sunil Kumar, T., 2020. Residual learning based CNN for breast cancer histopathological image classification. Int. J. Imaging Syst. Technol.
He, K., Zhang, X., Ren, S., Sun, J., 2016. Deep residual learning for image recognition. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778. https://doi.org/10.1109/CVPR.2016.90.
Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., Adam, H., 2017. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861.
Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q., 2017. Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708.
Hughes, D., Salathé, M., et al., 2015. An open access repository of images on plant health to enable the development of mobile disease diagnostics. arXiv preprint arXiv:1511.08060.
Karthik, R., Hariharan, M., Anand, S., Mathikshara, P., Johnson, A., Menaka, R., 2020. Attention embedded residual CNN for disease detection in tomato leaves. Appl. Soft Comput. 86, 105933.
Kingma, D.P., Ba, J., 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Krizhevsky, A., Sutskever, I., Hinton, G.E., 2012. ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems 25, pp. 1097–1105.
Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.-Y., Berg, A.C., 2016. SSD: Single shot multibox detector. In: European Conference on Computer Vision. Springer, pp. 21–37.
Mirza, M., Osindero, S., 2014. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784.
Rangarajan, A.K., Purushothaman, R., Ramesh, A., 2018. Tomato crop disease classification using pre-trained deep learning algorithm. Procedia Comput. Sci. 133, 1040–1047.
Redmon, J., Divvala, S., Girshick, R., Farhadi, A., 2016. You only look once: Unified, real-time object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 779–788.
Ren, S., He, K., Girshick, R., Sun, J., 2015. Faster R-CNN: Towards real-time object detection with region proposal networks. In: Advances in Neural Information Processing Systems, pp. 91–99.
Savary, S., Ficke, A., Aubertot, J.-N., Hollier, C., 2012. Crop losses due to diseases and their implications for global food production losses and food security.
Simonyan, K., Zisserman, A., 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z., 2016. Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2818–2826.
Widiyanto, S., Fitrianto, R., Wardani, D.T., 2019. Implementation of convolutional neural network method for classification of diseases in tomato leaves. In: 2019 Fourth International Conference on Informatics and Computing (ICIC). IEEE, pp. 1–5.
Zhang, K., Wu, Q., Liu, A., Meng, X., 2018. Can deep learning identify tomato leaf disease? Adv. Multimedia.
