Detection of Plant Leaf Diseases Using Deep Convolutional Neural Network Models
https://doi.org/10.1007/s11042-023-18099-3
Abstract
Food demand is exponentially increasing due to the increase in population in every coun-
try; hence, increasing the yield is one of the focus areas for sustainable agricultural devel-
opment. Predicting plant disease is one of the measures to increase crop yield and quality,
thereby increasing the economy. The present work aims to develop a web-based application
built with a deep learning model to detect plant leaf disease using a leaf image and alert
farmers with messages. A comparative study was conducted on the data of the PlantVillage
dataset for binary and multiclass classifications. Various deep convolutional neural network
(CNN) models, such as MobileNet, DenseNet201, ResNet50, Inception V3, and visual
geometry group (VGG) 16 and 19, have been compared with a proposed model. Performance
was assessed using precision, recall, the classification report, the confusion matrix, and
accuracy. MobileNet performed best among the selected models, with an accuracy of 97.35%
and precision and recall of 0.973 each for multiclass classification. The proposed model achieved
an accuracy of 99.39% with a loss of 0.0361, precision of 0.989, and a recall of 0.984 for
binary classification compared with deep CNN models. A web-based application was cre-
ated using the MobileNet model for the convenience of sending an email alert to the user
regarding plant disease. The research results help improve a country’s crop productivity
and the overall economy through prompt and precise decision-making on crop diseases.
* Vijaya Kalavakonda
vijayak@srmist.edu.in
Puja Singla
ps2779@srmist.edu.in
Ramalingam Senthil
senthilr@srmist.edu.in
1 Department of Computing Technologies, SRM Institute of Science and Technology, Kattankulathur, Chennai, India
2 Department of Mechanical Engineering, SRM Institute of Science and Technology, Kattankulathur, Chennai, India
64534 Multimedia Tools and Applications (2024) 83:64533–64549
1 Introduction
Agriculture plays a vital role in food supply in every country, and its biodiversity
offers a wide variety of crop species. As most of the population depends on
agriculture for a livelihood, it is vital to prevent the loss of crop production due to plant
diseases. Recent technological developments have made it possible to produce enough
food to meet the demands of the ever-increasing population. Food security remains chal-
lenged by plant diseases and other unexpected natural calamities [1, 2]. Statistics show
that around one-third of annual crop yield loss in India is due to plant diseases; hence, if
the disease outbreak is identified at the right time, loss of yield due to disease could be
alleviated.
Predicting plant disease remains challenging due to a lack of laboratory competence
and expertise. People can identify plant diseases and outbreaks early and take swift reme-
dial measures using automated detection technologies [3–5]. Several research studies use
machine learning and neural networks to prevent crop loss due to diseases [6–10].
Disease detection from leaf images typically involves acquiring the images,
extracting features from them, and using those features to classify each leaf as
diseased or healthy [11–14]. Some machine learning techniques proven to help detect
plant diseases are support vector machine, K-nearest neighbor, multiple linear regres-
sion, decision tree, random forest, naive Bayes, logistic regression, artificial neural net-
works, deep CNN, and fuzzy logic [15–20].
Cristin et al. [21] proposed an image-processing method for plant disease identifica-
tion by removing noise and detecting artefacts during preprocessing. Piecewise fuzzy
C-means clustering was used to segment the preprocessed images. Texture features
such as information gain, histograms of oriented gradients, and entropy were extracted
from the segmented images and classified using a deep belief network.
The experimental results showed that the rider and cuckoo search algorithm on a deep
belief network outperformed other existing methods. The attained accuracy, sensitivity,
and specificity were 0.877, 0.862, and 0.877, respectively.
Mohanty et al. [22] used a publicly available dataset with 54,306 images to train a
deep CNN for classifying 26 diseases of 14 crop species. They investigated the data-
set using deep learning models to develop a smartphone-assisted plant disease diagno-
sis. They focused on evaluating two popular architectures, AlexNet and GoogLeNet, on
their chosen dataset. They analyzed the performances of AlexNet and GoogLeNet with
models being trained from scratch and adapted the transfer learning method of training
using the selected dataset. Their results show an overall accuracy varying from 85.53%
(AlexNet::TrainingFromScratch) to 99.34% (GoogLeNet:: Transfer Learning). Their
results indicate that the deep learning approach could classify or predict plant disease
using a plant leaf image dataset. All results were reported by considering the classifica-
tion based on the crop species and the associated disease status.
Chen et al. [23] proposed a method to solve few-shot plant disease recognition using
local feature-matching conditional neural adaptive processes. They used a high-diversity
meta-dataset to train their models to detect unseen plant diseases. Compared with six
other models, their method achieved an average accuracy of 93.9% on unseen plant
diseases, while the accuracy of the other models ranged from 19% to 92.5%. However,
they used only 25 annotated samples, validated their models using the PlantVillage
dataset, and compared their results with two other versions of ResNet18. The local feature
mapping was better than the fully connected version, with an attained accuracy of 89%.
Hassan et al. [24] implemented deep CNN models to identify and diagnose plant leaf
diseases in three versions of the PlantVillage dataset. The first used the color version of
the images, the second one was grayscale, and the third was a segmented leaf image. The
images were split into training and testing sets in three ratios: 80–20, 70–30, and
60–40. They evaluated InceptionV3, InceptionResNetV2,
MobileNetV2 and EfficientNetB0 and reported testing accuracy of 98.42%, 99.11%,
97.02%, and 99.56%, respectively, with 80–20 split of training and testing. They reported
99.78%, the highest accuracy attained by EfficientNetB0 with 80–20 split on segmented
images. The MobileNetV2 architecture trained with the 70–30 split on grayscale images
had the lowest accuracy of 93.21%. With its optimized parameter count, MobileNetV2
is also suitable for mobile devices.
Singh et al. [25] reviewed various disease identification techniques and summarized
different imaging methods for early plant disease detection. The main techniques covered
were thermal, hyperspectral, fluorescence, multispectral, and 3D imaging. The practical
techniques for the early detection of plant diseases and classification were support vector
machine, k-Nearest neighbour (kNN), K-means clustering and deep learning.
Zhang et al. [26] discussed algorithms and methods from disease detection to qualita-
tive and quantitative evaluation based on changes in parameters like pigment, structure,
and water in disease diagnosis. They emphasized pathogen identification, biotic and
abiotic stress discrimination, early warning of plant disease, and satellite-based hyperspec-
tral technology. They concluded that accurate large-scale data analysis is necessary to use
the data for practical applications using the multi-source data trend. Kaur et al. [27] studied
the grape leaf diseases leaf blight, black rot, and black measles. They proposed a
hybrid CNN that fine-tuned EfficientNetB7 on the preprocessed and augmented plant
disease dataset, achieving 98.7% accuracy. They stated that the transfer-learning model was
more mature than a model trained from scratch.
Singh et al. [28] proposed a deep neural network for identifying and classifying maize
leaf diseases. The AlexNet model has eight layers: five convolutional and three fully
connected layers. They studied various epoch counts and reported that accuracy increases
with the number of iterations and stabilizes at around 25 epochs. They compared the per-
formance of AlexNet, visual geometry group (VGG), artificial neural networks, and support
vector machine and indicated that AlexNet performs better than the other models on the
PlantVillage dataset for maize disease classification.
The PlantVillage dataset, most explored by researchers, consists of images for 14 plant
species, with 54 k images belonging to 38 different classes. Several researchers used this
benchmark dataset for all species to characterize plant diseases by considering diseases
independent of crops [1, 29–31]. The leaf images of tomato [32, 33], cotton [34], potato
[35], and cassava leaves [36, 37] were analyzed using machine learning to analyze plant
diseases. Geetharamani and Pandian [38] augmented the PlantVillage dataset using
principal component analysis color augmentation, rotation, image flipping,
noise injection, gamma correction, and scaling, and trained a nine-layer deep CNN that
achieved 96.46% accuracy. Rosmala et al. [39] studied the PlantVillage dataset with 2700 training and 300
validation data for 100 epoch iterations using the transfer learning method with the VGG16
and InceptionV3 models. The InceptionV3 model with a tuned inception module showed
the best performance with an average precision score of 0.93, an average recall of 0.92, an
average F1 score of 0.92, and an average accuracy of 92%.
Tassis et al. [40] detected lesions of the coffee tree from in-field images collected via
smartphone using deep CNN and showed the suitability for implementation on an embed-
ded mobile platform. Tiwari et al. [41] trained a dense CNN architecture on a plant leaves
image dataset of six crops in 27 categories. Five-fold cross-validation was used to
evaluate the trained model, which achieved accuracies of 99.58% in validation and
99.199% on unseen test data. Later, the same study was extended to classify medicinal plant leaves with
multiclass classification using deep CNN [42]. Ennouni et al. [43] investigated a hybrid
approach combining partial differential equations-based image decomposition, segmenta-
tion, feature extraction, features selection and classification with a classification accuracy
of 95.9%. Web-based applications are helpful for plant disease detection and for
conveniently emailing users [44–46]. The repeatability and reproducibility of codes
and models are essential in real-time applications [47–50]. Native-language interpre-
tations are essential to reach farmers and disseminate information on plant
diseases and remedies [51, 52]. Across these studies, MobileNet gives the best results for
multiclass classification.
From the literature, several researchers investigated different deep CNN methodologies
to identify the leaves and the plant leaf diseases with compatibility with a mobile platform.
The research gap in this domain is a lack of automatic cluster center initialization and prac-
tical algorithms working on the leaf images captured under different environmental con-
ditions, automatically detecting leaf disease without much human intervention, and com-
municating the condition over mobile networks. Several mechanized image plant detection
and characterization methods exist, but this discovery ground is deficient. Most of the
related research for disease detection in plants using image-processing techniques focused
on individual plants. Multiclass classification of plant disease detection has been limited to
fewer plant species.
The main contribution of the present work is as follows.
• This work analyzes the performance of a model by performing both two-class and
38-class classification on the PlantVillage dataset, providing more robust data on
the detection of plant leaf diseases.
• Binary and multiclass classification methods are studied for plant leaf disease detection
and compared with state-of-the-art methods.
• This present work involved developing a web-based application built with a deep learn-
ing model to detect plant leaf disease using a leaf image and alerting farmers with mes-
sages.
The present work is structured as follows. Section 1 details the background
information and the need for the study. Section 2 describes the research methods adopted
in the present study. Section 3 presents and discusses the significant results.
Section 4 summarizes the major findings and sheds light on future work.
2 Methodology

The CNN algorithm has been used for disease detection in plants due to its strong
recognition of patterns, specifically in images. The CNN algorithm is implemented in Python 3
in Jupyter Notebook with the TensorFlow backend. This research uses Jupyter Notebook 6.3.0,
Google Colab Pro Notebook 2022, and Visual Studio Code 1.67.2. Hardware details: 11th Gen
Intel(R) Core(TM) i7-1165G7 processor, 12.0 GB DDR4 RAM, 64-bit operating system,
and an x64-based processor. Figure 1 shows the proposed models’ architectures for binary
and multiclass classifications.
2.1 Acquisition of data
The dataset used for this work is a version of the PlantVillage dataset consisting of 48,798
images, of which 15,084 are healthy leaves, and the remaining 33,714 are unhealthy (dis-
eased leaves). The dataset has images categorized into 33 classes, indicating species and a
specific disease; the dataset is processed to perform two-class classification: unhealthy and
healthy. The processed data is split into training, validation, and testing sets, and the
images are resized to 256 × 256 pixels for model optimization and prediction. Of the
samples, 62.6% are used to train the model, 19.9% to validate it, and 17.5% to test it.
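As a rough check, the reported split percentages can be translated into image counts with a short helper. This is a bookkeeping sketch only; the `split_counts` name and the rounding scheme are our own illustration, since the paper reports only the percentages.

```python
def split_counts(total, train_frac=0.626, val_frac=0.199):
    """Map the reported split fractions onto a dataset of `total` images.

    The remaining ~17.5% of samples go to the test set. Rounding to the
    nearest integer is an assumption.
    """
    train = round(total * train_frac)
    val = round(total * val_frac)
    test = total - train - val
    return train, val, test

# For the 48,798-image PlantVillage subset used here:
# split_counts(48798) -> (30548, 9711, 8539)
```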
2.2 Model training
In this proposed model, the Sequential model from Keras is used to provide a single input
as an input image to obtain a single classified output saying whether the plant is healthy
or not in case of binary classification and to determine the plant’s disease in case of mul-
ticlass classification. Sequential models are appropriate to use for a linear stack of lay-
ers or to obtain a single output for a single input. Models and layers are imported from
Keras to build the CNN architecture. Figure 2 shows the addition of layers using the add
method in a sequence for binary and multiclass classifications. The binary classification
has one output unit; the multiclass classification has 38 output units. The proposed model
is instantiated as a Sequential model to which 12 layers are added: four convolutional
layers with increasing filter counts (32, 64, and 128) alternating with four max-pooling
layers, followed by one flatten layer, one dense layer with ReLU activation, one dropout
layer with a rate of 0.5, and a final dense layer that determines the model’s output.
The final dense layer differs in the case of binary and multiclass classification. In binary
classification, one output unit has a sigmoid activation function; in multiclass classifica-
tion, 38 output units have a SoftMax activation function. All the layers are stacked together
to form the entire network. The primary difference between binary and multiclass classifi-
cation models is the number of output units in the final layer and the choice of activation
13
64538 Multimedia Tools and Applications (2024) 83:64533–64549
function. The final layer calculates the probability of belonging to each class and ensures
the sum of probabilities across all classes equals 1.
Afterwards, the Keras .summary() method is used to inspect the output shape of each layer.
Then, the .compile() method, with an optimizer, loss function, and metrics, is used to compile
the model before training. The proposed model uses the RMSProp optimizer for both binary
and multiclass classification. Regarding the loss function, binary classification uses
binary_crossentropy and multiclass classification uses categorical_crossentropy. While creating
a sequential model for both binary and multi-class classification, the layer structure and
configuration should be tailored to the specific task. Adding layer structures is the same for
both but they have different output layer structures and loss functions. The model’s design
is crucial in achieving good performance and often requires experimentation and tuning.
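The layer stack and compile step described above can be sketched with Keras as follows. The filter count of the fourth convolutional layer and the width of the dense layer are not stated in full in the text, so 128 filters and 512 units are assumed here.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_model(num_classes):
    """12-layer Sequential CNN; num_classes=1 gives the binary variant."""
    model = keras.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=(256, 256, 3)),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu"),  # assumed filter count
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(512, activation="relu"),  # assumed width
        layers.Dropout(0.5),
        layers.Dense(1 if num_classes == 1 else num_classes,
                     activation="sigmoid" if num_classes == 1 else "softmax"),
    ])
    model.compile(
        optimizer="rmsprop",
        loss="binary_crossentropy" if num_classes == 1 else "categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model
```

Here `build_model(1)` gives the binary classifier with a sigmoid output, while `build_model(38)` gives the multiclass classifier with a softmax output whose class probabilities sum to 1.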
Model training is the next step: the fit method is run for 30 epochs with 956 steps per
epoch, and the trained model is saved in .h5 format. The graph is plotted
using matplotlib for each increasing number of epochs of training loss versus validation
loss and training accuracy versus validation accuracy. Since the validation loss decreases
with training loss and validation accuracy increases with training accuracy, the model is
not experiencing overfitting or underfitting. Figure 3 shows the learning curve for binary
classification. Figure 4 shows the learning curve for multiclass classification. The learning
curve was obtained with a testing loss of 0.215 and an accuracy of 94.45%.
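The learning curves in Figs. 3 and 4 come from the Keras training history; a minimal plotting sketch follows. The fit call is shown as a comment because it needs the prepared datasets, and `plot_learning_curves` and the file names are illustrative helpers, not code from the study.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so figures can be saved without a display
import matplotlib.pyplot as plt

# history = model.fit(train_ds, validation_data=val_ds,
#                     epochs=30, steps_per_epoch=956).history
# model.save("plant_disease_model.h5")  # illustrative file name

def plot_learning_curves(history, out_path="learning_curves.png"):
    """Plot training vs validation loss and accuracy over the epochs."""
    fig, (ax_loss, ax_acc) = plt.subplots(1, 2, figsize=(10, 4))
    epochs = range(1, len(history["loss"]) + 1)
    ax_loss.plot(epochs, history["loss"], label="training loss")
    ax_loss.plot(epochs, history["val_loss"], label="validation loss")
    ax_loss.set_xlabel("epoch")
    ax_loss.legend()
    ax_acc.plot(epochs, history["accuracy"], label="training accuracy")
    ax_acc.plot(epochs, history["val_accuracy"], label="validation accuracy")
    ax_acc.set_xlabel("epoch")
    ax_acc.legend()
    fig.savefig(out_path)
    return fig
```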
2.3 Performance metrics
The performance of the various classification algorithms has been analyzed using
evaluation metrics. The four quadrants of the confusion matrix are used to calculate these
performance metrics.
• True Positive (TP): The result when the algorithm’s prediction of the positive class is
correct.
• True Negative (TN): The result when the algorithm’s prediction of the negative class is
correct.
• False Positive (FP): The result when the algorithm predicts the positive class but
the actual class is negative.
• False Negative (FN): The result when the algorithm predicts the negative class but
the actual class is positive.
Accuracy is the percentage of correct predictions made by the model, as per Eq. (1):

Accuracy = (TP + TN) / (TP + TN + FP + FN)  (1)

Recall is the proportion of correctly predicted positive observations out of all observations
that belong to the positive class, as per Eq. (2):

Recall = TP / (TP + FN)  (2)

Precision is the proportion of correctly predicted positive observations out of all
observations the algorithm predicted as positive, as per Eq. (3):

Precision = TP / (TP + FP)  (3)

The F1 score is the harmonic mean of precision and recall, thus conveying the balance
between the two metrics, as per Eq. (4):

F1 score = 2 × (Precision × Recall) / (Precision + Recall)  (4)

The receiver operating characteristic (ROC) curve is a further performance measure of the
classification model.
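Equations (1)–(4) translate directly into code; the following small helper (an illustrative sketch, not code from the study) computes all four metrics from the confusion-matrix quadrants.

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall and F1 score from the confusion-matrix
    quadrants, following Eqs. (1)-(4)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Example: 90 TP, 80 TN, 10 FP, 20 FN -> accuracy 0.85, precision 0.90
```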
3 Results and discussion

The model is evaluated on unseen data from the test images directory; Keras provides an
evaluate method that reports the loss and accuracy. The testing loss is 0.0361 and the
testing accuracy is 99.39%. The saved model is imported using the Keras libraries, and each
image is loaded from the testing folder and converted into an array before the model
classifies the plant as healthy or unhealthy. The architecture of the proposed model is also
used for multiclass classification with the PlantVillage dataset of 54 k images and 38 classes
of diseases across 14 plant species. Table 1 shows the dataset description used for multiclass
classification. The optimizer and loss function used by the model are “RMSProp” and
“categorical_crossentropy”, respectively.
Transfer learning is a deep learning approach that reuses CNN models pre-trained on
millions of images and retrains them as per requirements. Plant disease detection is an
image classification task, so such models were applied to the dataset to compare accuracy
and performance. All the models are pre-trained on the ImageNet dataset, which consists
of 1.4 M images and 1000 different categories. For each model, the pre-trained layers are
frozen and the model is retrained by adding the last few layers per the requirement. The
retrained models are VGG16, ResNet50,
VGG19, Inception V3, and ResNet152V2. All configurations run for 30 epochs each, and
nearly consistent convergence is observed in the learning curves. Table 2 compares testing
metrics on all pre-trained models and models made from scratch. The model from scratch
showed the highest accuracy of 99.39%, loss of 0.0361, precision of 0.989, and recall
of 0.984 among the selected models. Table 3 shows the comparison of the classification
among the selected models.
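The freeze-and-retrain setup described above can be sketched as follows, with MobileNet as the backbone. The pooling layer and the 0.5 dropout in the new head follow the description elsewhere in the paper, but the exact head layers are an assumption.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_transfer_model(num_classes, weights="imagenet"):
    """Frozen pre-trained backbone plus a new trainable classification head."""
    base = keras.applications.MobileNet(
        include_top=False, input_shape=(256, 256, 3), weights=weights)
    base.trainable = False  # freeze the pre-trained layers
    model = keras.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="rmsprop",
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```

The same pattern applies to the other backbones (VGG16, ResNet50, Inception V3, and so on) by swapping the `keras.applications` constructor.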
3.1 Binary classification
Figure 5 shows the graph obtained for each model (VGG16, ResNet50, VGG19, Inception
V3, ResNet152V2) with each increasing epoch of training loss/accuracy vs validation loss/
accuracy (learning curve) for binary classification.
3.2 Multiclass classification
The classification report and Confusion Matrix were obtained for each class on all the pre-
trained models (MobileNet, Inception V3, DenseNet201, VGG 19, VGG 16, ResNet 50)
and our proposed model. Table 4 shows the comparison of testing metrics. Figure 6 shows
the count of correctly classified images and the number of images incorrectly classified.
3.3 Implementation
It is observed that MobileNet gives the best results for multiclass classification com-
pared to our model because it provides 97.35% accuracy, whereas our proposed model is
94.45% accurate. In misclassified images, MobileNet has performed better on the selected
test dataset. Therefore, the MobileNet model is used for a web application trained on the
selected dataset with a dropout rate of 0.5. After training, the MobileNet model was saved
in .h5 format, which preserves the model architecture and weights for future use. To build
the web application, the saved MobileNet model is loaded using Python and Keras into the
Flask app, and its predict method is used to identify the disease. The front end of the web
application is built using HTML, CSS, and JavaScript, and the back end uses Flask, a small
and lightweight Python web framework. The Flask framework connects the deep learning
model with the application’s front end. The feature of emailing the user the predicted
disease is added using the Flask-Mail extension.
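A minimal sketch of the Flask back end follows. The route names, the `predict_disease` helper, and the model file name are illustrative assumptions; the real application also wires in the Flask-Mail extension to email the result, shown here only as a comment.

```python
from flask import Flask, request

app = Flask(__name__)

# model = keras.models.load_model("mobilenet_plant_disease.h5")  # saved .h5 model

def predict_disease(image_bytes):
    """Placeholder for image preprocessing + model.predict on the upload."""
    return "Tomato___Late_blight"  # illustrative class label

@app.route("/")
def index():
    return "Upload a leaf image to detect plant disease."

@app.route("/predict", methods=["POST"])
def predict():
    leaf = request.files["leaf"].read()
    label = predict_disease(leaf)
    # With Flask-Mail configured, mail.send(...) could email `label` here.
    return {"disease": label}
```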
The built web application is helpful for plant disease detection and the convenience of
emailing the user. Figure 7 shows the implementation user interface with the results. More
images, deep learning instructions, and training are required to expand the application to
various crops globally. The resolution of images is equally important to identify a plant
disease appropriately. The native language interpretations are essential to reach the farmers
and disseminate information on plant diseases and appropriate remedies.
The proposed methodology results align with the literature [53–59]. Table 5 shows the
comparative results of classification accuracy of the present study with earlier studies.
Fig. 5 Learning curve for binary classification: a VGG16, b ResNet50, c VGG19, d Inception V3, e
ResNet152V2
A limitation of the dataset used for evaluating the model is that the images used for
training and testing were taken under laboratory conditions. The performance accuracy reported for
the considered dataset might differ from that of real-time images due to variations in the
geometric factors of leaves and plant species. Thus, further research is required to extend
the proposed methodology to real-time images of different plant species involving different
resolution camera sources with minimal space, memory, and time to classify the results
accurately.
Fig. 6 The count of correctly classified and incorrectly classified images with prediction accuracy
Fig. 7 Web application interface: a Before uploading leaf image, b After uploading leaf image with the
identified disease
Table 5 Comparison of classification accuracy of the proposed approach with previous studies
Approach Application Accuracy (%) Ref
4 Conclusion
This study compared several CNN models trained through transfer learning and a new
training model from scratch. All the models are pre-trained on the ImageNet dataset, which
consists of 1.4 M images and 1000 different categories. The training models are VGG16,
ResNet50, VGG19, Inception V3, and ResNet152V2. The major conclusions drawn from
the current investigation are as follows.
• After examining loss, accuracy, the classification report, the confusion matrix, and
other testing metrics, the MobileNet model outperforms DenseNet201, VGG-16,
VGG-19, ResNet50, and Inception V3 for multiclass classification, attaining 97.35%
accuracy with precision and recall of 0.973 each.
• Compared to VGG-19, VGG-16, Resnet50, Inception V3, and Resnet152V2, the sug-
gested model performs best for binary classification with an accuracy of 99.39%, loss
of 0.0361, precision of 0.989, and recall of 0.984 for plant disease detection.
• Using MobileNet, the created web application could instantly identify plant diseases
and inform the user.
This model’s performance can be assessed further by applying it to diverse crop leaf
datasets containing photos of numerous crop species. The model could also be trained
to spot leaf diseases at tilted or inclined leaf positions, and the investigation could be
extended to examine other plant sections. This approach could incorporate
hardware to photograph a plant and identify its disease. The proposed model could be eval-
uated by applying it to real-time photographs to develop it as a user-friendly application
with memory, time, and cost-effectiveness. The classification models might also be trained
to provide the local crops with efficient disease control.
Abbreviations CNN: Convolutional neural network; FN: False Negative; FP: False Positive; kNN: K- Near-
est Neighbors; TN: True Negative; TP: True Positive; VGG: Visual geometry group
Acknowledgements The authors thank the SRM Institute of Science and Technology, Kattankulathur,
Chennai, India, for providing the required research infrastructure.
Author’s contributions Puja Singla: Conceptualization, Methodology, Design, Data collection, Data Visu-
alization, Formal analysis, Vijaya Kalavakonda: Conceptualization, Methodology, Design, Investigation.
Data collection, Data analysis, Writing- Original draft preparation, Ramalingam Senthil: Methodology, Data
analysis, Data curation, Formal analysis, Writing, Writing–review & editing.
Declarations
Resources The related data are discussed in the manuscript and the relevant resources are provided as follows.
Data: https://github.com/pujasingla/Plant-Village-Dataset
Codes: Binary Classification: https://github.com/pujasingla/Plant-Disease-Detection-Binary-Classification
Multi Classification/Web Application: https://github.com/pujasingla/Plant_LeafDiseaseDetection
References
1. Lee SH, Goëau H, Bonnet P, Joly A (2020) New perspectives on plant disease characterization based
on deep learning. Comput Electron Agric 170:105220. https://doi.org/10.1016/j.compag.2020.105220
2. Fisher MC, Gurr SJ, Cuomo CA, Blehert DS, Jin H et al (2020) Threats posed by the fungal kingdom
to humans, wildlife, and agriculture. MBio 11(3). https://doi.org/10.1128/mBio.00449-20
3. Gui P, Dang W, Zhu F, Zhao Q (2021) Towards automatic field plant disease recognition. Comput
Electron Agric 191:106523. https://doi.org/10.1016/j.compag.2021.106523
4. Astani M, Hasheminejad M, Vaghefi M (2022) A diverse ensemble classifier for tomato disease recog-
nition. Comput Electron Agric 198:107054. https://doi.org/10.1016/j.compag.2022.107054
5. Fan X, Luo P, Mu Y, Zhou R, Tjahjadi T, Ren Y (2022) Leaf image based plant disease identifica-
tion using transfer learning and feature fusion. Comput Electron Agric 196:106892. https://doi.org/10.
1016/j.compag.2022.106892
6. Chen J, Chen J, Zhang D, Sun Y, Nanehkaran YA (2020) Using deep transfer learning for image-based
plant disease identification. Comput Electron Agric 173:105393. https://doi.org/10.1016/j.compag.
2020.105393
7. Li Y, Nie J, Chao X (2020) Do we really need deep CNN for plant diseases identification? Comput
Electron Agric 178:105803. https://doi.org/10.1016/j.compag.2020.105803
8. Karthik R, Hariharan M, Anand S, Mathikshara P, Johnson A, Menaka R (2020) Attention embedded
residual CNN for disease detection in tomato leaves. Appl Soft Comput 86:105933. https://doi.org/10.
1016/j.asoc.2019.105933
9. Atila Ü, Uçar M, Akyol K, Uçar E (2021) Plant leaf disease classification using EfficientNet deep
learning model. Ecol Inform 61:101182. https://doi.org/10.1016/j.ecoinf.2020.101182
10. Shewale MV, Daruwala RD (2023) High performance deep learning architecture for early detection
and classification of plant leaf disease. J Agric Food Res 14:100675. https://doi.org/10.1016/j.jafr.
2023.100675
11. Sethy PK, Barpanda NK, Rath AK, Behera SK (2020) Deep feature based rice leaf disease identifi-
cation using support vector machine. Comput Electron Agric 175:105527. https://doi.org/10.1016/j.
compag.2020.105527
12. Abbas A, Jain S, Gour M, Vankudothu S (2021) Tomato plant disease detection using transfer learning
with C-GAN synthetic images. Comput Electron Agric 187:106279. https://doi.org/10.1016/j.compag.
2021.106279
13. Jiang Z, Dong Z, Jiang W, Yang Y (2021) Recognition of rice leaf diseases and wheat leaf diseases
based on multi-task deep transfer learning. Comput Electron Agric 186:106184. https://doi.org/10.
1016/j.compag.2021.106184
14. Dananjayan S, Tang Y, Zhuang J, Hou C, Luo S (2022) Assessment of state-of-the-art deep learning
based citrus disease detection techniques using annotated optical leaf images. Comput Electron Agric
193:106658. https://doi.org/10.1016/j.compag.2021.106658
15. Zeng W, Li M (2020) Crop leaf disease recognition based on Self-Attention convolutional neural net-
work. Comput Electron Agric 172:105341. https://doi.org/10.1016/j.compag.2020.105341
16. Ji M, Wu Z (2022) Automatic detection and severity analysis of grape black measles disease based on
deep learning and fuzzy logic. Comput Electron Agric 193:106718. https://doi.org/10.1016/j.compag.
2022.106718
17. Ravi V, Acharya V, Pham TD (2022) Attention deep learning-based large-scale learning classifier for
cassava leaf disease classification. Expert Syst 39(2). https://doi.org/10.1111/exsy.12862
18. Russel NS, Selvaraj A (2022) Leaf species and disease classification using multiscale paral-
lel deep CNN architecture. Neural Comput Appl 34:19217–19237. https://doi.org/10.1007/
s00521-022-07521-w
19. Xiao D, Zeng R, Liu Y, Huang Y, Liu J, Feng J, Zhang X (2022) Citrus greening disease recogni-
tion algorithm based on classification network using TRL-GAN. Comput Electron Agric 200:107206.
https://doi.org/10.1016/j.compag.2022.107206
20. Zhao Y, Sun C, Xu X, Chen J (2022) RIC-net: A plant disease classification model based on the
fusion of inception and residual structure and embedded attention mechanism. Comput Electron Agric
193:106644. https://doi.org/10.1016/j.compag.2021.106644
21. Cristin R, Santhosh Kumar B, Priya C, Karthick K (2020) Deep neural network based Rider-Cuckoo
Search Algorithm for plant disease detection. Artif Intell Rev 53(7):4993–5018. https://doi.org/10.
1007/s10462-020-09813-w
22. Mohanty SP, Hughes DP, Salathé M (2016) Using deep learning for image-based plant disease
detection. Front Plant Sci 7:1419. https://doi.org/10.3389/fpls.2016.01419
23. Chen L, Cui X, Li W (2021) Meta-learning for few-shot plant disease detection. Foods 10(10):2441.
https://doi.org/10.3390/foods10102441
24. Hassan SM, Maji AK, Jasiński M, Leonowicz Z, Jasińska E (2021) Identification of plant-leaf dis-
eases using CNN and transfer-learning approach. Electronics 10(12):1388. https://doi.org/10.3390/
electronics10121388
25. Singh V, Sharma N, Singh S (2021) A review of imaging techniques for plant disease detection. Artif
Intell Agric 4:229–242. https://doi.org/10.1016/j.aiia.2020.10.002
26. Zhang N, Yang G, Pan Y, Yang X, Chen L, Zhao C (2020) A review of advanced technologies and
development for hyperspectral-based plant disease detection in the past three decades. Remote Sens
12(19):3188. https://doi.org/10.3390/rs12193188
27. Kaur P, Harnal S, Tiwari R, Upadhyay S, Bhatia S, Mashat A, Alabdali AM (2022) Recognition of
leaf disease using hybrid convolutional neural network by applying feature reduction. Sensors 22:575.
https://doi.org/10.3390/s22020575
28. Singh RK, Tiwari A, Gupta RK (2022) Deep transfer modeling for classification of Maize Plant Leaf
Disease. Multimed Tools Appl 81:6051–6067. https://doi.org/10.1007/s11042-021-11763-6
29. Pardede HF, Suryawati E, Zilvan V, Ramdan A, Kusumo RBS, Heryana A, Yuwana RS, Krisnandi D,
Subekti A, Fauziah F, Rahadi VP (2020) Plant diseases detection with low resolution data using nested
skip connections. J Big Data 7:57. https://doi.org/10.1186/s40537-020-00332-7
30. Bedi P, Gole P (2021) Plant disease detection using hybrid model based on convolutional autoencoder
and convolutional neural network. Artif Intell Agric 5:90–101. https://doi.org/10.1016/j.aiia.2021.05.
002
31. Albattah W, Nawaz M, Javed A, Masood M, Albahli S (2022) A novel deep learning method for
detection and classification of plant diseases. Complex Intell Syst 8:507–524. https://doi.org/10.1007/
s40747-021-00536-1
64548 Multimedia Tools and Applications (2024) 83:64533–64549
32. Sharma S, Sharma G, Menghani E, Sharma A (2023) A comprehensive review on automatic detection
and early prediction of tomato diseases and pests control based on leaf/fruit images. Lect Notes
Netw Syst 599:276–296. https://doi.org/10.1007/978-3-031-22018-0_26
33. Karthika I, Megha M, Roshni M (2023) Deep learning approach to automated tomato plant leaf dis-
ease diagnosis. Proceedings of the 2023 2nd International Conference on Electronics and Renewable
Systems, ICEARS 2023, pp 1381–1388. https://doi.org/10.1109/ICEARS56392.2023.10085564
34. Kukadiya H, Meva D (2022) Automatic cotton leaf disease classification and detection by convo-
lutional neural network. Communications in Computer and Information Science, 1759 CCIS, pp
247–266. https://doi.org/10.1007/978-3-031-23092-9_20
35. Shukla PK, Sathiya S (2022) Early detection of potato leaf diseases using convolutional neural
network with web application. Proceedings - 2022 IEEE World Conference on Applied Intelligence
and Computing, AIC 2022, pp 277–282. https://doi.org/10.1109/AIC55036.2022.9848975
36. Paiva-Peredo E (2023) Deep learning for the classification of cassava leaf diseases in unbalanced
field data set. Communications in Computer and Information Science, 1798 CCIS, pp 101–114.
https://doi.org/10.1007/978-3-031-28183-9_8
37. Yadav R, Pandey M, Sahu SK (2022) Cassava plant disease detection with imbalanced dataset
using transfer learning. Proceedings - 2022 IEEE World Conference on Applied Intelligence and
Computing, AIC 2022, pp 220–225. https://doi.org/10.1109/AIC55036.2022.9848882
38. Geetharamani G, Arun Pandian J (2019) Identification of plant leaf diseases using a nine-layer deep
convolutional neural network. Comput Electr Eng 76:323–338. https://doi.org/10.1016/j.compe
leceng.2019.04.011
39. Rosmala D, Prakha Anggara MR, Sahat JP (2021) Transfer learning with VGG16 and InceptionV3
model for classification of potato leaf disease. J Theor Appl Inf Technol 99(2):279–292
40. Tassis LM, Tozzi de Souza JE, Krohling RA (2021) A deep learning approach combining instance
and semantic segmentation to identify diseases and pests of coffee leaves from in-field images.
Comput Electron Agric 186:106191. https://doi.org/10.1016/j.compag.2021.106191
41. Tiwari V, Joshi RC, Dutta MK (2021) Dense convolutional neural networks based multiclass plant
disease detection and classification using leaf images. Ecol Inform 63:101289. https://doi.org/10.
1016/j.ecoinf.2021.101289
42. Tiwari V, Joshi RC, Dutta MK (2022) Deep neural network for multi-class classification of medici-
nal plant leaves. Expert Syst 39(8):e13041. https://doi.org/10.1111/exsy.13041
43. Ennouni A, Sihamman NO, Sabri MA, Aarab A (2021) Early detection and classification approach
for plant diseases based on MultiScale image decomposition. J Comput Sci 17(3):284–295. https://
doi.org/10.3844/JCSSP.2021.284.295
44. Barman U, Choudhury RD, Sahu D, Barman GG (2020) Comparison of convolution neural net-
works for smartphone image based real time classification of citrus leaf disease. Comput Electron
Agric 177:105661. https://doi.org/10.1016/j.compag.2020.105661
45. Rahman CR, Arko PS, Ali ME, Iqbal Khan MA, Apon SH, Nowrin F, Wasif A (2020) Identifica-
tion and recognition of rice diseases and pests using convolutional neural networks. Biosyst Eng
194:112–120. https://doi.org/10.1016/j.biosystemseng.2020.03.020
46. Hanh BT, Van Manh H, Nguyen N (2022) Enhancing the performance of transferred efficientnet
models in leaf image-based plant disease classification. J Plant Dis Prot 129(3):623–634. https://
doi.org/10.1007/s41348-022-00601-y
47. Kumar Y, Hasteer N, Bhardwaj A, Yogesh (2022) Convolutional neural network architecture for
detection and classification of diseases in fruits. Curr Sci 122(11):1315–1320. https://doi.org/10.
18520/cs/v122/i11/1315-1320
48. Waldamichael FG, Debelee TG, Ayano YM (2022) Coffee disease detection using a robust
HSV color-based segmentation and transfer learning for use on smartphones. Int J Intell Syst
37(8):4967–4993. https://doi.org/10.1002/int.22747
49. Matarese V (2022) Kinds of replicability: different terms and different functions. Axiomathes
32(Suppl 2):647–670. https://doi.org/10.1007/s10516-021-09610-2
50. Baker M (2016) Why scientists must share their research code. Nature. https://doi.org/10.1038/
nature.2016.20504
51. Idicula SM, David Peter S (2007) A multilingual query processing system using software agents. J
Digit Inf Manag 5(6):385–390
52. Derici C, Aydin Y, Yenialaca C, Aydin NY, Kartal G, Özgür A, Güngör T (2018) A closed-domain
question answering framework using reliable resources to assist students. Nat Lang Eng 24(5):725–
762. https://doi.org/10.1017/S1351324918000141
53. Hossain MI, Jahan S, Al Asif MR, Samsuddoha M, Ahmed K (2023) Detecting tomato leaf diseases
by image processing through deep convolutional neural networks. Smart Agric Technol
5:100301. https://doi.org/10.1016/j.atech.2023.100301
54. Singh G, Yogi KK (2023) Comparison of RSNET model with existing models for potato leaf disease
detection. Biocatal Agric Biotechnol 50:102726. https://doi.org/10.1016/j.bcab.2023.102726
55. Hari P, Singh MP (2023) A lightweight convolutional neural network for disease detection of fruit
leaves. Neural Comput Appl 35(20):14855–14866. https://doi.org/10.1007/s00521-023-08496-y
56. Mohammed EA, Mohammed GH (2023) Citrus leaves disease diagnosis. Indones J Electr Eng Comput
Sci 31(2):925–932. https://doi.org/10.11591/ijeecs.v31.i2.pp925-932
57. Ahad MT, Li Y, Song B, Bhuiyan T (2023) Comparison of CNN-based deep learning architectures for
rice diseases classification. Artif Intell Agric 9:22–35. https://doi.org/10.1016/j.aiia.2023.07.001
58. Islam MM, Adil MAA, Talukder MA, Ahamed MKU, Uddin MA, Hasan MK, Sharmin S, Rahman
MM, Debnath SK (2023) DeepCrop: deep learning-based crop disease prediction with web applica-
tion. J Agric Food Res 14:100764. https://doi.org/10.1016/j.jafr.2023.100764
59. Singh P, Singh P, Farooq U, Khurana SS, Verma JK, Kumar M (2023) CottonLeafNet: cotton plant leaf
disease detection using deep neural networks. Multimed Tools Appl 82(24):37151–37176. https://doi.
org/10.1007/s11042-023-14954-5
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and
institutional affiliations.
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under
a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted
manuscript version of this article is solely governed by the terms of such publishing agreement and applicable
law.