
USMAN INSTITUTE OF TECHNOLOGY

Affiliated with NED University of Engineering & Technology, Karachi

Department of Computer Science


B.S. Computer Science / Software Engineering
FINAL YEAR PROJECT REPORT
Batch-2020
PLANT RECOGNITION AND DISEASE DETECTION SOFTWARE
By
Rumaisa Batool 20B-004-SE
Maheen Zafar Khan 20B-061-SE

Syeda Yusra Atif 20B-064-SE


Rabeya Khan 20B-088-SE

Supervised by

ENGR. FAUZAN SAEED

ST-13, Block 7, Gulshan-e-Iqbal, Abul Hasan Isphahani Road, Opposite Safari Park, P.O. Box 75300,

Karachi, Pakistan. Phone: 34978274-5; 34994305; 34982476; http://www.uit.edu


Submission Performa

Name (1) Rumaisa Batool

(2) Maheen Zafar Khan

(3) Syeda Yusra Atif

(4) Rabeya Khan

Roll No. (1) 20B-004-SE

(2) 20B-061-SE

(3) 20B-064-SE

(4) 20B-088-SE

Project Title: PLANT RECOGNITION AND DISEASE DETECTION SOFTWARE

Supervisor: Engr. Fauzan Saeed


This report is submitted as required for the Project in accordance with the rules laid
down by the Usman Institute of Technology as part of the requirements for the award
of the degree of Bachelor of Computer Science/Software Engineering. We declare that
the work presented in this report is our own except where due reference or
acknowledgment is given to the work of others.

Signatures of students Date

(1)…………………………….. ……………………..

(2)……………………………. …………………….

(3)…………………………….. …………………….

(4) …………………………….. …………………….

Acknowledgments

We first thank Almighty Allah, and then everyone who contributed to the successful
completion of the "Plant Recognition and Disease Detection Software" project.
Completing this project took teamwork, and we are grateful to the team members
whose commitment and knowledge were essential to building this product. We also
appreciate the contributions of the scholarly and scientific communities to the
fields of plant pathology, machine learning, and computer vision; the existing body
of knowledge made this project possible. Finally, we thank the intended users,
the farmers, gardeners, and plant enthusiasts, for their prospective use of this
software. The application was designed with your requirements and expectations in
mind, with the goal of improving your experience of plant care and disease
prevention. The "Plant Recognition and Disease Detection Software" project was
shaped by the collaborative spirit and group effort honored in this acknowledgment.

Abstract

The "Plant Recognition and Disease Detection Software" project offers a substantial
improvement in the field of plant care and disease prevention. The software aims to
close the gap between plant care and disease prevention, with an emphasis on
empowering people, improving user experiences, and supporting sustainable
agriculture. The work builds on the impressive advancements in machine learning
and computer vision methods for plant identification and disease detection. By
providing users with a cutting-edge tool for precise plant identification, disease
detection, and efficient preventive measures, the project seeks to transform plant
care practices and contributes to the larger objective of improving plant
biodiversity and supporting sustainable agriculture.

TABLE OF CONTENTS

Submission Performa......................................................i
List of tables.........................................................vii
List of figures.......................................................viii
List of symbols and Units...............................................ix
CHAPTER I................................................................1
1.1 Introduction.........................................................1
1.2 Role Of Image Processing............................................11
1.3 System Diagram......................................................11
1.4 Tabular Form Of Disease Detection...................................12
1.5 Subprojects.........................................................13
1.6 Aim Of Project......................................................14
1.7 Statement Of Problem And Solution...................................14
1.7.3 User Interface Design.............................................15
1.7.4 Database Development..............................................15
1.8 Expected Outcome....................................................15
1.9 Conclusion..........................................................16
CHAPTER II...............................................................2
2.1 Introduction.........................................................2
2.3 Similar Application.................................................10
2.4 Current Work........................................................11
2.5 Related Work........................................................22
2.6 Gaps in Current Knowledge...........................................22
2.7 List Of Previous Similar Software...................................22
2.8 Algorithms..........................................................23
2.9 Main Features And Technical Interfaces..............................23
2.10 Technical Interfaces...............................................24
CHAPTER III.............................................................18
3.1 Introduction........................................................26
3.2 Hardware............................................................18
3.3 Software............................................................18
3.4 Libraries...........................................................27
3.5 Algorithms..........................................................28
3.6 Requirements........................................................21
3.7 Conclusion..........................................................29
CHAPTER IV..............................................................22
4.1 Introduction........................................................22
4.2 Use Case Diagram....................................................22
4.3 Activity Diagram....................................................31
4.4 System Diagram......................................................32
4.5 Class Diagram.......................................................33
4.6 Entity Relation Diagram.............................................27
4.7 Sequence Diagram....................................................28
4.7.1 Description.......................................................37
4.8 Object Diagram......................................................38
4.9 Component Diagram...................................................39
4.10 Deployment Diagram.................................................41
4.11 Operational Diagram................................................42
CHAPTER V...............................................................37
5.1 Introduction........................................................37
5.2 Convolutional Neural Network........................................37
5.3 Algorithm...........................................................37
5.4 Pseudo Code.........................................................38
5.5 Complexities Of The Algorithm Worst Case............................45
5.6 Complexities Of The Algorithm Best Case.............................46
5.7 Comparison Of Algorithms............................................46
5.8 Algorithms..........................................................47
5.8.4 Data Preprocessing................................................48
5.9 Conclusion..........................................................48
CHAPTER VI..............................................................43
6.1 Front End...........................................................49
6.2 Backend.............................................................53
CHAPTER VII.............................................................72
Testing.................................................................72
7.1 Introduction........................................................72
7.2 Objectives Of Testing...............................................72
7.3 Types Of Testing....................................................72
7.4 Functional Testing..................................................73
7.4.1 Types And Techniques Of Functional Testing........................73
7.5 Non Functional Testing..............................................74
7.5.1 Types And Techniques Of Non Functional Testing....................74
7.8 Functional Testing..................................................79
7.8.1 Conclusion Of Functional Testing..................................80
7.9 Error Handling Testing..............................................80
7.9.1 Conclusion Of Error Handling Testing..............................81
7.10 Regression Testing.................................................81
7.10.1 Conclusion Of Regression Testing.................................82
7.11 Integration Testing................................................82
7.11.1 Conclusion Of Integration Testing................................83
7.12 Unit Testing.......................................................83
7.12.1 Conclusion Of Unit Testing.......................................84
7.13 Decision Testing...................................................84
7.14.1 Home Page........................................................86
7.14.2 Upload Image Page................................................80
7.14.3 Result Page......................................................81
CHAPTER VIII............................................................82
Appendices..............................................................82
CHAPTER IX..............................................................94
Achievements............................................................88
CHAPTER X...............................................................95
Future Enhancement

List of tables

Table 1.4 Tabular Form Of Disease Detection


Table 2.7 List Of Previous Similar Software
Table 5.7 Comparison Of Algorithms
Table 7.6 Black Box Testing
Table 7.7 White Box Testing
Table 7.8 Functional Testing
Table 7.9 Error Handling Testing
Table 7.10 Regression Testing
Table 7.11 Integration Testing
Table 7.12 Unit Testing
Table 7.13 Decision Testing

List of figures

Figure 1.3 SYSTEM DIAGRAM

Figure 4.2.1 USE CASE DIAGRAM

Figure 4.3.1 ACTIVITY DIAGRAM

Figure 4.4.1 SYSTEM DIAGRAM

Figure 4.5.1 CLASS DIAGRAM

Figure 4.6.1 ENTITY RELATIONSHIP DIAGRAM

Figure 4.7.1 SEQUENCE DIAGRAM

Figure 4.8.1 OBJECT DIAGRAM

Figure 4.9.1 COMPONENT DIAGRAM

Figure 4.10.1 DEPLOYMENT DIAGRAM

Figure 4.11.1 OPERATIONAL DIAGRAM

List of symbols and Units

Symbols:

CNNs: Convolutional Neural Networks

GUI: Graphical User Interface

DB: Database

AI: Artificial Intelligence

ML: Machine Learning

UI: User Interface

DBMS: Database Management System

API: Application Programming Interface

Units:

Accuracy: measured as a percentage (%)

Precision: measured as a percentage (%)

Disease Detection Rate: Percentage or ratio

Cost: Measured in currency (e.g., dollars, euros)

Time: Measured in hours, minutes, or seconds

Database Size: Measured in storage units (e.g., megabytes, gigabytes)

Resolution: For images, measured in pixels

Deployment Success Rate: Percentage or ratio
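The first three metrics above can be computed directly from prediction counts. Below is a minimal sketch; the confusion counts are invented for demonstration only, not project results:

```python
# Toy illustration of the evaluation metrics listed above.
# The counts (tp, tn, fp, fn) below are invented for demonstration.

def accuracy(tp, tn, fp, fn):
    """Fraction of all predictions that were correct, as a percentage."""
    return 100.0 * (tp + tn) / (tp + tn + fp + fn)

def precision(tp, fp):
    """Of the samples flagged as diseased, the fraction that truly were."""
    return 100.0 * tp / (tp + fp)

def detection_rate(tp, fn):
    """Recall: fraction of truly diseased samples the system caught."""
    return 100.0 * tp / (tp + fn)

tp, tn, fp, fn = 80, 90, 10, 20  # hypothetical confusion counts
print(f"Accuracy:       {accuracy(tp, tn, fp, fn):.1f}%")   # 85.0%
print(f"Precision:      {precision(tp, fp):.1f}%")          # 88.9%
print(f"Detection rate: {detection_rate(tp, fn):.1f}%")     # 80.0%
```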

CHAPTER I

1.1 INTRODUCTION

The project is "Plant Recognition and Disease Detection Software". This
introduction establishes the scope of the project by describing the problem
domain and its importance. The software aims to bridge the gap between plant
care and disease prevention through cutting-edge technologies: empowering
users, improving the user experience, and making a difference in sustainable
agriculture. Crop yield and health are crucial in today's agricultural
environment, yet plant diseases seriously threaten ideal yields, affecting
both food security and economic stability. Conventional disease detection
techniques frequently depend on manual inspection by skilled professionals,
making them time-consuming, expensive, and prone to human error. Technological
developments have opened the door to creative solutions to these problems,
such as software that detects plant diseases.

This report explores the creation and application of a state-of-the-art plant
disease detection system that makes use of machine learning (ML) and artificial
intelligence (AI) methods. The program is designed to give agronomists, hobbyists,
and farmers a dependable, effective, and easy-to-use tool for diagnosing diseases
early and managing their treatment and prevention. By applying image processing
techniques and predictive analytics, the program can recognize a variety of plant
diseases from digital photographs, then provide timely insights and treatment
recommendations.

This project's main goal is to reduce crop losses from diseases in order to increase
agricultural production and sustainability. This program encourages a more
ecologically friendly method of farming by supporting proactive disease control
and lowering the need for chemical pesticides. This report's subsequent parts will
address the software's technical architecture, the AI models used, data gathering
and preprocessing techniques, user interface design, and any possible effects on the
agriculture sector. [1]

1.2 ROLE OF IMAGE PROCESSING:

Image processing plays a crucial role in plant disease detection software by
enabling the analysis and interpretation of visual information from images of
plants. To guarantee consistency and concentrate on pertinent areas, the pipeline
starts with obtaining high-quality photos and proceeds through preprocessing
stages such as noise reduction, normalization, and cropping. Segmentation is then
used to isolate particular regions of interest and separate the plant from the
background. Next, a variety of algorithms extract key properties such as color,
texture, and shape that can indicate disease signs. To identify disease patterns,
these features are fed into machine learning models or deep learning networks
such as Convolutional Neural Networks (CNNs). Finally, the software's user-
friendly interface offers visual feedback and suggestions, helping farmers and
agronomists with early diagnosis. The overall process involves capturing,
enhancing, and analyzing images to identify patterns or anomalies associated
with plant diseases. Image processing is integral to plant disease detection
software because it enables efficient and accurate analysis of visual
information, leading to early disease detection and improved crop management. It
is often integrated with machine learning techniques, such as deep learning, to
recognize patterns associated with specific diseases: trained models learn from
labeled datasets to identify and classify diseases in new images. [37]
Continuous monitoring of plant health through image processing allows early
detection of diseases, enabling timely intervention to prevent their spread. The
combination of image processing and machine learning enhances the software's
ability to handle complex, large-scale agricultural datasets. [2]

1.3 SYSTEM DIAGRAM:

System diagrams typically show the components of a process, its inputs and
outputs, including the hardware, software, databases, and people involved, as
well as the communication pathways between them.

DESCRIPTION:
The diagram above illustrates the sequential flow of the system. First, the user
inputs an image, initiating the image pre-processing phase. During processing,
relevant leaf features are extracted. These extracted features are then compared
with the dataset, and the system uses this comparison to classify the image and
predict which disease class the input belongs to. [38]
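The compare-and-classify step in this flow can be sketched as a nearest-neighbour lookup against stored feature vectors. The class names and feature values below are invented for illustration; the actual system uses a trained CNN rather than this simple distance rule:

```python
import math

# Hypothetical reference features (mean R, G, B of the leaf region)
# for a few disease classes; the values are illustrative, not real data.
DATASET = {
    "healthy":        (0.16, 0.71, 0.20),
    "leaf_rust":      (0.55, 0.42, 0.15),
    "powdery_mildew": (0.80, 0.82, 0.78),
}

def classify(features):
    """Predict the class whose stored feature vector is closest
    (Euclidean distance) to the extracted features."""
    def distance(ref):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(features, ref)))
    return min(DATASET, key=lambda cls: distance(DATASET[cls]))

print(classify((0.52, 0.45, 0.18)))  # leaf_rust
```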

1.4 TABULAR FORM OF DISEASE DETECTION:


The plant disease symptoms are presented below in tabular form, listing each
plant, its diseases, and the corresponding symptoms.

1.5 SUBPROJECTS:

The subprojects of our project, Plant Recognition and Disease Detection Software,
consist of the following:

1.5.1 Plant Identification System Development:

This subproject focuses on building a robust plant identification system. It
involves implementing Convolutional Neural Networks (CNNs) for accurate
classification of plant species based on input images. The development team will
explore large-scale plant image datasets and pre-trained CNN models, fine-tuning
them with plant-specific datasets to achieve high accuracy. [39]
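The fine-tuning idea, keeping a pre-trained backbone frozen as a feature extractor and training only a new classification head, can be illustrated with a toy logistic-regression head in NumPy. The "backbone features" here are random stand-ins, not the outputs of a real CNN:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for frozen backbone output: 20 samples of 8-dim features.
features = rng.normal(size=(20, 8))
labels = (features[:, 0] > 0).astype(float)  # toy binary target

# New classification head: a single logistic unit, trained from scratch
# while the (simulated) backbone stays untouched.
w = np.zeros(8)
b = 0.0
lr = 0.5

def loss_and_grad(w, b):
    """Mean binary cross-entropy and its gradients for the head."""
    z = features @ w + b
    p = 1.0 / (1.0 + np.exp(-z))  # sigmoid
    loss = -np.mean(labels * np.log(p + 1e-9)
                    + (1 - labels) * np.log(1 - p + 1e-9))
    err = p - labels
    return loss, features.T @ err / len(labels), err.mean()

initial_loss, _, _ = loss_and_grad(w, b)
for _ in range(200):  # gradient descent on the head only
    _, gw, gb = loss_and_grad(w, b)
    w -= lr * gw
    b -= lr * gb
final_loss, _, _ = loss_and_grad(w, b)
print(f"head loss: {initial_loss:.3f} -> {final_loss:.3f}")
```

In a real system the frozen features would come from a pre-trained CNN (as the text describes) and the head would be a dense layer trained on plant-specific images; the mechanics of "freeze the backbone, train the head" are the same.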

1.5.2 Disease Detection Algorithm Implementation:

In this subproject, the team will delve into the implementation of disease detection
algorithms. The goal is to analyze plant images for visual symptoms and identify
common plant diseases. The approach includes leveraging CNNs and possibly
image segmentation algorithms for precise disease localization. Recommendations
for preventive measures and management strategies will be integrated into the
system.

1.5.3 User Interface Design And Experience:

This subproject is dedicated to crafting a user-friendly interface for seamless


interaction. It involves designing and implementing features that enable users to
easily capture and upload plant images, view identification results, and access
disease-related information. The emphasis is on creating an intuitive and engaging
user experience.

1.5.4 Database Development:

Comprehensive databases for plant species information and diseases are crucial
components. [40] This subproject involves the development and integration of
these databases. The plant database will contain details such as names,
characteristics, and care instructions, while the disease database will include
symptoms and recommended management strategies.
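The two databases described above might be organized along these lines. This SQLite sketch uses table and column names of our own choosing, and invented sample rows; it is not the project's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # in-memory DB for illustration

# Plant database: names, characteristics, care instructions.
# Disease database: symptoms and recommended management strategies.
conn.executescript("""
CREATE TABLE plants (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    characteristics TEXT,
    care_instructions TEXT
);
CREATE TABLE diseases (
    id INTEGER PRIMARY KEY,
    plant_id INTEGER REFERENCES plants(id),
    name TEXT NOT NULL,
    symptoms TEXT,
    management TEXT
);
""")

conn.execute("INSERT INTO plants (name, characteristics, care_instructions) "
             "VALUES (?, ?, ?)",
             ("Tomato", "Annual vine", "Full sun; water regularly"))
conn.execute("INSERT INTO diseases (plant_id, name, symptoms, management) "
             "VALUES (?, ?, ?, ?)",
             (1, "Early blight", "Dark concentric leaf spots",
              "Remove affected leaves; rotate crops"))

# Joining the two tables answers "what should I do about this disease?"
row = conn.execute("""
    SELECT p.name, d.name, d.management
    FROM diseases d JOIN plants p ON p.id = d.plant_id
""").fetchone()
print(row)
```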

1.5.5 Documentation And Reporting:

Documentation is a vital aspect of the project. This subproject focuses on creating


a detailed set of documents including system requirements, design specifications, a
user manual, and a comprehensive project report. This documentation will serve as
a reference for future maintenance, further development, and understanding of the
software application.

1.6 AIM OF PROJECT:

The aim of this project is to develop a comprehensive software application that
seamlessly integrates plant identification, disease detection, and preventive
measures. The primary goal is to empower plant enthusiasts, gardeners, and
farmers with a tool that enables them to accurately identify plants and gain the
insights needed to manage plant health effectively. Through advanced image
recognition algorithms and disease detection techniques, the project aspires to
contribute to sustainable agriculture. [41]

1.7 STATEMENT OF PROBLEM AND SOLUTION:

It consists of the following:

1.7.1 Plant Identification:


Problem Definition: Existing plant identification applications lack a holistic
approach, often providing limited information about identified diseases and
falling short on accuracy.

Scope of Effort: This project addresses the accuracy problem by implementing a
robust system that can analyze plants from input images with a high degree of
precision.

1.7.2 Disease Detection:

Problem Definition: Current applications struggle with accurate and timely disease
detection in plants, hindering effective preventive measures.

Scope of Effort: The project tackles this challenge by implementing advanced
disease detection algorithms, leveraging CNNs and image segmentation techniques.
It aims to identify common plant diseases based on visual symptoms and provide
users with recommended preventive measures and management strategies.

1.7.3 User Interface Design:

Problem Definition: Many existing applications lack an intuitive and
user-friendly interface, impacting user experience.

Scope of Effort: The project addresses this by ensuring the development of a
user-friendly interface. Users should be able to easily capture or upload plant
images, view identification results, and navigate through the software
seamlessly. [42]

1.7.4 Database Development:

Problem Definition: Limited availability of comprehensive disease databases
affects the depth of information provided to users.

Scope of Effort: The project involves the development and integration of
comprehensive databases containing detailed information on plants and diseases.
This ensures accurate identification and provides valuable insights to users.

1.8 EXPECTED OUTCOME:

The application of plant disease detection software is anticipated to produce a
number of noteworthy outcomes that enhance agricultural sustainability and
output. The software's main goal is the accurate and early identification of
plant diseases, which is essential for prompt management and action. By
diagnosing diseases early, farmers can apply targeted treatments to minimize
crop losses and increase yields. The software's user-friendly tooling saves time
and labor costs by lessening the requirement for in-depth field inspections and
expert expertise. Thanks to its accessibility and ease of use on mobile devices
and other digital platforms, a wide spectrum of users can take advantage of the
software's features.

1.9 CONCLUSION:

Applications of this type are used in a variety of areas, increasing efficiency
and keeping plants secure from disease. This application helps reduce many
bacterial diseases in plants. It is a user-friendly app intended not only for
farm owners but also for people interested in gardening who do not know how to
keep their plants safe from disease. We have compared the features of different
applications against each other so that we can add further important features to
our plant disease detection application. Previous applications have
shortcomings, which is why newer applications are built: in most of them the
data is already stored in the database and only informational content is shown,
while only some applications offer a detection option. [4]

CHAPTER II

BACKGROUND AND LITERATURE REVIEW

2.1 INTRODUCTION

The field of plant identification and disease detection has witnessed significant
advancements, particularly through the application of computer vision and machine
learning techniques. This literature review explores existing knowledge, highlighting
the state of the art in the domain and laying the groundwork for the project. Today,
one of the biggest concerns is the early diagnosis of plant diseases. Farm owners and
gardeners are suffering significant losses as the agricultural worth of their land
progressively declines. Fewer than 10% of farmers have formal education, so the
majority lack knowledge of proper plant cultivation techniques; instead, they rely on
their own experience rather than a scientific approach, which is why many plants die
when their owners cannot identify diseases and apply the appropriate pesticides. If
plant diseases are not treated when they first arise, production costs may increase
significantly because the illness has the potential to spread across the entire
field. [5]

2.2 BACKGROUND AND LITERATURE REVIEW:

The farming industry remains one of the most impactful in the worldwide market as well as
in food supply, yet it remains plagued by constant threats posed by diseases affecting crops.
Current methods used in disease diagnosis in plants involve the physical examination of the
plants which is usually time consuming, expensive, and prone to human influence.
Additionally, these approaches may not be easily implemented by farmers who have low
education standards and little capital to invest in their production. Therefore, the
requirement for better and much more effective and easily implemented methods and
techniques for early detection of plant diseases has fueled the scientific innovations in this
area. The use of computer vision and machine learning presents approaches that could be
useful in solving these challenges by using algorithms that can automatically identify such
conditions as well as give timely, accurate diagnoses.

The application of image processing with machine learning techniques has become a
revolutionary instrument in plant disease diagnosis. Techniques such as eliminating
background noise, standardization, and removing borders contribute substantially to
preparing good images for further analysis. These techniques emphasize the apparent
features that can help identify diseases, namely the color, texture, and shape of the
plant parts. The pre-processed visual data feed machine learning models, especially
CNNs, that are able to identify and diagnose diseases after training on a labeled set
of examples. Quantitative studies have also shown that these models achieve fairly low
error rates in diagnosing different plant diseases captured in images, and they are
therefore very useful in modern farming practice.

Recent research has elaborated positively on these technologies in the context of
agriculture. For instance, Ferentinos (2018) showed that CNNs could reach up to 99.53%
accuracy in identifying plant diseases affecting different crops in certain learning
paradigms. Likewise, Mohanty, Hughes, and Salathé (2016) established high precision of
disease classification by deep learning models irrespective of changes in conditions.
These outcomes shed light on the possibility of incorporating artificial intelligence
solutions to support farming, enabling active prevention of infections and reduced
chemical use. In addition, the accessibility of such systems makes it possible for
smallholder farmers to benefit from the technology as well, improving the productivity
and sustainability of agriculture.

However, several limitations remain in the present work on plant disease detection
software. Captured images may not always be of good quality, and environmental
conditions might not favor disease diagnosis. Furthermore, creating large, carefully
annotated datasets for training machine learning models is time-consuming. To address
these challenges, image processing algorithms are continually updated and databases
are augmented to cover different types of crops and disease states. The aim is to spur
collaborations between researchers, agricultural specialists, and technology
developers to design, from the ground up, sound and easy-to-use applications that
incorporate the requirements of farmers worldwide.

The present developments in image processing and machine learning techniques for
plant disease detection can be considered an advancement in agricultural science. The
information that reaches farmers through the proposed technologies helps them diagnose
ailing crops early and prescribe appropriate treatments that improve farming
productivity. The ultimate aim of such innovations is not only to increase yield but
also to develop a sustainable way of farming that avoids heavy use of chemical
pesticides. To sum up, this literature review establishes the need for
interdisciplinary research and the use of advanced technologies in solving one of the
major problems affecting agriculture.

2.3 SIMILAR APPLICATIONS:

Several applications have addressed the intersection of plant identification and disease
detection, contributing valuable insights. Notable among them is Plant Vision, a mobile
application that combines deep learning for plant species recognition and image analysis
for disease detection. The main features include real-time identification and disease
diagnosis, making it a relevant reference for our project. Deployment strategies have
shown success in both desktop and mobile environments, and cost considerations revolve
around maintenance and updates.

2.3.1 Plantix:

Plantix is an app launched in 2015 for farmers and gardeners to diagnose plant diseases
and nutrient deficiencies. Users can upload images of their plants, and the app uses
image recognition and machine learning algorithms to identify diseases and provide
suitable treatment recommendations. It is a mobile-only application, available exclusively
on Android.

2.3.2 Leaf Doctor:

Leaf Doctor is another application that uses image processing to identify plant diseases.
It allows users to take pictures of affected leaves; the app then analyzes the images to
diagnose diseases and suggest potential solutions. It is available only on iOS.[45]
2.3.3 Croprotect:

Croprotect is an online, UK-based platform that offers real-time disease and pest
information for crops. It provides comprehensive information on various diseases, pests,
and their management in agriculture, assisting farmers in disease diagnosis and
management.

2.3.4 Agrio:

Agrio is not a free mobile application, although it offers a 30-day free trial. It was
launched on June 28, 2017, and requires users to create an account. It does not detect
the disease name itself; instead it uploads the image to your account so that other
farmers or users can suggest a solution in the image's comment box, which is
inconvenient for the uploader.[46] If nobody answers, you cannot find out which disease
is affecting your plant.

2.4 CURRENT WORK:

Image processing is now growing faster than most other fields and demands ever more
effective work; deep learning itself is a vast area of computer science. Image processing is
used, for example, to change colors or backgrounds in images. We use k-means clustering
in the pre-processing phase to group the image into clusters of similar pixels. We use the
following classification and disease-detection techniques and algorithms in our application:

2.4.1 Classification and disease technique:

Several classification and detection techniques can be used for plant leaf disease
classification. An image is captured and then resized to match the size of the images
stored in the database. Next, the image quality is improved and noise is removed. This
pre-processing is done before feature extraction.

2.4.2 RGB Image Conversion:

To extract the vein image from each leaf, RGB photos are first converted to grayscale. Next,
fundamental morphological operations are performed on the picture, which is then converted
to a binary image. Each binary pixel value (0 or 1) is then mapped back to the corresponding
RGB picture value. Finally, the disease is detected using Pearson correlation, the dominant
feature set, and a neural network.
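As a rough illustration of the conversion chain described above (grayscale, then binary), here is a minimal NumPy sketch. It is illustrative only: a real pipeline would more likely use OpenCV calls such as cv2.cvtColor and cv2.threshold, and the threshold value of 128 is an arbitrary choice.

```python
import numpy as np

def to_grayscale(rgb):
    # Standard luminosity-weighted RGB-to-grayscale conversion.
    return (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]).astype(np.uint8)

def to_binary(gray, threshold=128):
    # Pixels at or above the threshold become 1, the rest 0.
    return (gray >= threshold).astype(np.uint8)

# Tiny 2x2 test image: one white pixel, three black pixels.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = [255, 255, 255]
gray = to_grayscale(img)
binary = to_binary(gray)
```

The binary mask can then be mapped back onto the RGB image, keeping only the pixels where the mask is 1.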

2.4.3 Image Pre-processing:

Since the images were taken in the actual field, they may contain water stains, spores, and
residue as noise. Pre-processing removes this noise and adjusts individual pixel values,
improving image quality. To make diseases easier to diagnose, k-means clustering is used
during pre-processing to group the image into clusters. The steps are:
 Color space conversion
 Filtering
 Smoothing
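The smoothing step above can be sketched with a simple box filter; this NumPy version is a stand-in for what a real implementation would do with cv2.blur or cv2.GaussianBlur:

```python
import numpy as np

def mean_filter(gray, k=3):
    # Box-blur smoothing: average each pixel over its k x k neighbourhood.
    pad = k // 2
    padded = np.pad(gray.astype(float), pad, mode="edge")
    out = np.zeros_like(gray, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
    return out / (k * k)

# A single bright "noise" pixel gets spread out and attenuated.
noisy = np.array([[0, 0, 0], [0, 90, 0], [0, 0, 0]], dtype=float)
smoothed = mean_filter(noisy)
```

After smoothing, the isolated spike of 90 at the centre is averaged down to 10, which is why smoothing suppresses dust and water-stain noise before segmentation.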

2.4.4 Image Segmentation:

Image segmentation is the process of dividing a digital image into distinct parts (sets of
pixels, also called image objects). The underlying goal of segmentation is to reorganize the
image's representation into something more meaningful and easier to analyze. The leaf
image is therefore divided into several sections to make it simpler to locate the primary
problem region. The following approaches are used:

 Edge based
 Region based
 Clustering based
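Since the report relies on k-means for the clustering-based approach, a minimal one-dimensional version over pixel intensities can be sketched as follows. This is a toy illustration: a production system would more likely call cv2.kmeans on colour vectors.

```python
import numpy as np

def kmeans_segment(pixels, k=2, iters=10):
    # Minimal 1-D k-means: assign each pixel to the nearest centre,
    # then move each centre to the mean of its assigned pixels.
    centers = np.linspace(pixels.min(), pixels.max(), k)
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean()
    return labels, centers

# Dark leaf pixels vs. bright lesion pixels separate into two clusters.
intensities = np.array([10.0, 12.0, 11.0, 200.0, 198.0, 205.0])
labels, centers = kmeans_segment(intensities)
```

The two resulting clusters correspond to the healthy-tissue and lesion regions, which is exactly the separation the segmentation stage needs.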

2.4.5 Feature Extraction:

Feature extraction is the key step for accurately predicting the infected region. Both shape
and texture features are extracted here. The shape-oriented features include area, major axis
length, eccentricity, solidity, and perimeter; the texture-oriented features include contrast,
correlation, energy, homogeneity, and mean. Extraction focuses on:

• Color
• Shape
• Texture
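A toy version of this feature stage might compute area and mean intensity from a lesion mask, with the intensity standard deviation as a crude stand-in for texture contrast. The actual system's feature set, as listed above, is richer (eccentricity, solidity, homogeneity, and so on).

```python
import numpy as np

def extract_features(gray, mask):
    # Shape feature: area = number of pixels in the lesion mask.
    # Colour feature: mean intensity of the masked region.
    # Texture stand-in: standard deviation of intensity ("contrast").
    region = gray[mask > 0]
    return {
        "area": int(mask.sum()),
        "mean_intensity": float(region.mean()),
        "contrast": float(region.std()),
    }

gray = np.array([[10, 10], [200, 200]], dtype=float)
mask = np.array([[0, 0], [1, 1]], dtype=np.uint8)   # bottom row is the lesion
features = extract_features(gray, mask)
```

The resulting feature vector is what the classifier consumes in the next stage.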

2.5 Related Work:

Research by Smith et al. (2020) explores a comprehensive plant care system similar to our
project, integrating identification, disease detection, and care recommendations. The system
utilizes a combination of handcrafted features and CNNs for identification and employs
clustering algorithms for disease categorization. Understanding the technical interface, their
work emphasizes seamless user interaction and efficient information retrieval.

2.6 Gaps in Current Knowledge:

Existing applications reveal a noticeable gap: no holistic solution seamlessly integrates
plant identification, disease detection, and preventive measures. Many applications lack
accurate disease detection capabilities, and more detailed information on managing and
preventing plant diseases is needed. These identified gaps form the basis of our project's
objectives. [6]

2.7 List Of Previous Similar Software:

The following is a list of previous similar software and their features:

2.8 ALGORITHMS:

The project can involve various algorithms, including:

2.8.1 Convolutional Neural Networks (CNNs):

CNNs are commonly used for image classification tasks. They can learn to extract relevant
features from plant images and classify them into different species or disease categories.

2.8.2 Feature Extraction Algorithms:

These algorithms can be employed to extract meaningful features from images, such as color,
texture, shape, or leaf venation patterns. These features can then be used for plant
identification or disease detection.

2.8.3 Image Segmentation:

Image segmentation algorithms can be used to separate plant regions from the background or
isolate specific parts of the plant for more accurate analysis.

2.8.4 Data Preprocessing:

Data preprocessing techniques, such as resizing images, normalizing pixel values, and
augmenting the dataset, can be applied to improve the performance and robustness of the
models.
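A minimal sketch of two of these steps, pixel normalization and flip augmentation, in NumPy (illustrative only; frameworks such as Keras provide these through utilities like ImageDataGenerator or tf.image):

```python
import numpy as np

def preprocess(image):
    # Scale pixel values from [0, 255] into [0, 1] for model input.
    return image.astype(np.float32) / 255.0

def augment(image):
    # Horizontal flip: a cheap augmentation that doubles the training set.
    return image[:, ::-1]

img = np.array([[0, 255], [128, 64]], dtype=np.uint8)
norm = preprocess(img)
flipped = augment(img)
```

Resizing would be done with an image library (e.g. OpenCV's cv2.resize) rather than raw NumPy, so it is omitted here.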

2.9 MAIN FEATURES AND TECHNICAL INTERFACES:

2.9.1 Automated Disease Detection

Image Capture and Upload: Users can capture images of plants using their smartphones or
digital cameras and upload them to the software.

Real-Time Analysis: The software processes images in real-time, providing immediate feedback
on the health status of the plant.

2.9.2 Advanced Image Processing

Preprocessing: The software automatically performs noise reduction, normalization, and cropping
to ensure high-quality images are analyzed.

Segmentation: It isolates the plant from the background and focuses on regions of interest, such as
leaves and fruits, where disease symptoms are most likely to appear.

2.9.3 Feature Extraction and Analysis

Color Analysis: Detects changes in color that may indicate diseases.

Texture Analysis: Identifies texture variations that are symptomatic of different plant diseases.

Shape Analysis: Monitors deformations or abnormalities in the shape of plant parts.

2.9.4 Machine Learning and AI Integration

Model Training: Utilizes large datasets of plant images to train machine learning models,
particularly Convolutional Neural Networks (CNNs), for high accuracy in disease detection.

Continuous Learning: The system can be updated with new data to improve its accuracy and
adapt to new disease variants.

2.9.5 Diagnostic Reports and Recommendations

Disease Identification: Provides detailed information on the detected disease, including symptoms
and potential causes.

Treatment Suggestions: Offers recommended treatments and management practices based on the
diagnosed disease.

2.10 Technical Interfaces

2.10.1 User Interface (UI)

Mobile Application: A dedicated app for smartphones and tablets that allows users to capture
images, upload them, and receive diagnostic results on the go.

Web Portal: A responsive web-based interface accessible via browsers, providing similar
functionalities as the mobile app.

2.10.2 API Integration

RESTful API: Allows integration with other agricultural management systems and third-party
applications. We used axios library in frontend which provides a simple and intuitive API for
making HTTP requests. Developers can specify request parameters such as URL, method,
headers, data, and query parameters using a clean and expressive syntax. It is also used for
connecting backend and frontend through API. This API enables external systems to send images
for analysis and retrieve diagnostic results.[48]

2.10.3 Image Processing Engine

Backend Server: Handles image preprocessing, segmentation, feature extraction, and disease
detection. This server is optimized for performance to ensure real-time analysis.

Machine Learning Models: Hosted on the backend server, these models process the features
extracted from the images to detect and classify diseases.

By combining these features and technical interfaces, the plant disease detection software aims to
provide a comprehensive, efficient, and user-friendly solution for modern agricultural practices.

2.11 CONCLUSION:

Applications of this kind are employed in many contexts to improve efficiency and protect
plants from disease. They help users work more efficiently and reduce the number of
bacterial infections that affect plants. They are not only for farm owners but also for
gardening enthusiasts who are unsure how to protect their plants from illness. To improve
our plant disease detection application, we compared several aspects of these applications
with one another. Because earlier applications had certain problems, newer ones were
created; in most of them the data was already stored in a database and merely displayed as
information, while some applications also offered a detection option.[7]

HARDWARE, SOFTWARE ANALYSIS AND REQUIREMENTS

3.1 INTRODUCTION

The right hardware and software are necessary and crucial components of any application;
without these two elements, no program can be created. This project primarily uses a
scalable, straightforward image processing technique that can handle the image from any
angle, including up, down, left, right, behind, and in front. The dataset performs well in
image segmentation (splitting a digital image into several segments) and image processing
in general. Since Python contains the necessary libraries, it is the programming language we
target with our tools.[8]

3.2 HARDWARE:

The software application can be developed to run on various hardware devices such as desktop
computers, laptops, smartphones, and tablets with standard camera capabilities.

3.3 SOFTWARE:

The plant disease detection web application is developed in Python, which offers many
useful tools, with Jupyter Notebook for back-end programming and HTML/CSS for
front-end web programming. It consists of:

3.3.1 Front End:

The front end is developed using React. This choice of technology ensures a responsive
and visually appealing user interface.

3.3.2 Back End:

The back end is implemented in Python, leveraging the Django framework. Python's
versatility and Django's simplicity align with the project's development goals.

3.3.3 Machine Learning Frameworks:

TensorFlow and Keras are employed for developing and deploying machine learning models,
especially CNNs, for plant identification and disease detection.

3.3.4 Database:

SQL Server Management Studio is used to manage the database, providing a lightweight
and easily deployable solution for storing plant and disease information.

3.4 LIBRARIES:

Some libraries that can be leveraged for this project include the following.[9]

3.4.1 OpenCV:

OpenCV (Open Source Computer Vision Library) provides a wide range of functions and
algorithms for image and video processing, including image recognition and feature extraction.

3.4.2 TensorFlow:

TensorFlow is an open-source machine learning framework that can be used for training and
deploying deep learning models, such as Convolutional Neural Networks (CNNs), which are
essential for image classification tasks.

3.4.3 Keras:

Keras is a high-level neural networks API that can serve as an abstraction layer on top of
TensorFlow. It provides a user-friendly interface for building and training deep learning models.
[49]

3.4.4 Scikit-learn:

Scikit-learn is a popular machine learning library in Python that offers a wide range of
algorithms and tools for classification tasks.

3.5 ALGORITHMS:

The project can involve various algorithms, including:

3.5.1 Convolutional Neural Networks (CNNs):

CNNs are commonly used for image classification tasks. They can learn to extract relevant
features from plant images and classify them into different species or disease categories. [50]

3.5.2 Feature Extraction Algorithms:

These algorithms can be employed to extract meaningful features from images, such as color,
texture, shape, or leaf venation patterns. These features can then be used for plant identification or
disease detection.

3.5.3 Image Segmentation:

Image segmentation algorithms can be used to separate plant regions from the background or
isolate specific parts of the plant for more accurate analysis.

3.5.4 Data Preprocessing:

Data preprocessing techniques, such as resizing images, normalizing pixel values, and
augmenting the dataset, can be applied to improve the performance and robustness of the models.

3.6 REQUIREMENTS:

 As a user, I want to capture images of plants, so that I can identify plant diseases
accurately.
 As a user, I want an intuitive interface, so that I can easily navigate through the application
and view identification results.
 As a user, I want to receive information about the plant disease, so that I can understand its
condition.
 As a user, I expect the application to provide guidance for the identified plant disease, so that I
can take proper care of the plant.
 As a user, I want to detect common diseases in plants based on visual symptoms, so that I can
intervene in a timely manner.

3.7 CONCLUSION:

In conclusion, by fusing scalable image processing methods with an intuitive user interface, this
plant disease detection project aims to provide a complete solution. Using HTML, CSS,
JavaScript, and Python for development, the application makes sure it works with a range of
hardware. Precise plant identification and disease detection are made possible by the application
of TensorFlow and Keras to machine learning, specifically Convolutional Neural Networks
(CNNs). The project encompasses a wide range of functionality using OpenCV for image and
video processing, SQLite for lightweight database administration, and scikit-learn for further
machine learning tools. The application's analytical skills are improved with the addition of
algorithms for feature extraction, data preprocessing, and image segmentation. The user

requirements place a strong emphasis on the necessity of precise disease detection, user-friendly
navigation, comprehensive disease information, and instructions for plant care. Overall, this
project prioritizes user experience and practicality in plant disease management in addition to
addressing the technical aspects of image processing and machine learning.[10]

SOFTWARE DESIGN AND MODELING

4.1 INTRODUCTION

In software design, developers translate the functional requirements into a structured plan,
determining the architecture, components, and interactions within the system. Modeling involves
creating visual representations, such as UML diagrams, to depict the system's structure, behavior,
and interactions, aiding both developers and stakeholders in understanding and communicating
complex software designs.[51] Effective software design and modeling not only contribute to the
clarity of system architecture but also facilitate collaboration among development teams,
streamline the implementation process, and ultimately result in the delivery of high-quality
software that aligns with user expectations and business needs.[11]
4.2 USE CASE DIAGRAM:

Although each use case can itself carry a great deal of detail, a use case diagram helps give
a higher-level perspective of the system. It provides a simplified, graphical representation
of what the system should actually do.

4.2.1 Use case diagram

4.2.1 Description:

Here's a description of the elements in the diagram:

Actor: User

Represents the primary actor interacting with the system.

Use Cases:

Upload Image:

Enables the user to upload images, possibly related to plants or diseases.

Enables the user to browse the image.


View Plant Identification:

Permits the user to view identification information about plants based on the uploaded images.

View Disease Detection Results:

Provides the user with information on disease detection results related to the uploaded images.

The user can then generate and save a report.

4.3 ACTIVITY DIAGRAM:

An activity diagram is a behavioral diagram, an advanced form of flowchart, that models
the flow from one activity to another.

4.3.1 Activity diagram

4.3.1 Description:

The diagram illustrates the system's sequential process. Upon user image input, the system
validates the format. Valid images undergo feature extraction for disease detection. If a
disease is found, the system presents a thorough report with information, precautions, and
necessary care, along with an option to generate a report. If no disease is found, a status
message stating that no disease was detected is shown.

4.4 SYSTEM DIAGRAM:

System diagrams typically show the components of a process, its inputs and outputs
(including the hardware, software, databases, and people involved), and the communication
pathways between them.

4.4.1 System diagram

4.4.1 Description:

The above diagram illustrates the sequential flow of the system. At first, the user inputs an image,
initiating the image pre-processing phase. In the process, relevant leaf features are extracted.
These extracted features are then compared with the dataset. The system uses this comparison to
classify the image and predict which disease class the data provided belongs to.

4.5 CLASS DIAGRAM:
The class diagram is the most widely used UML diagram type for software documentation.
Since most software being developed is still based on the OOP paradigm, class diagrams
are a natural way to document the software.

4.5.1 Class diagram

4.5.1 Description:

The provided diagram contains the classes that cover the whole project's work. The classes are:

User: It represents the end-user interacting with the system, responsible for uploading and
browsing images, requesting predictions, and generating and viewing reports.

Image Processor: It is responsible for the important preprocessing operations, such as
feature extraction, segmentation, and disease classification; it then returns whether a disease
is present, along with the disease name.

Report Generator: It manages the creation of the disease report and includes an option to save
the generated report, enhancing the system's functionality.

Disease: It provides the necessary details of various diseases, including their names,
symptoms, and care instructions, ensuring the system has a comprehensive understanding of
disease-related information.

Contact us: It records the user's name and email, along with any feedback they want to give.

4.6 ENTITY RELATION DIAGRAM:

An Entity Relationship Diagram, also known as an ERD, ER diagram, or ER model, is a type
of structural diagram used in database design.[52] An ERD contains different symbols and
connectors that visualize two important kinds of information: the major entities within the
system scope, and the inter-relationships among these entities.

4.6.1 Entity Relation diagram

4.6.1 Description:

The provided ERD outlines the entities Image, Report, Plant Type, and Disease Info.
Users can upload multiple images, each associated with one thorough report providing
information about the detected disease, precautions, and necessary care. The relationships
involved are User-Image (one-to-many), Image-Report (one-to-one), Image-Disease Info
(one-to-many), and Disease Info-Report (one-to-one). Validity checks ensure that the
format of uploaded photos is correct. Important attributes include User ID, Image ID,
Report ID, Disease ID, and timestamps. This ERD concisely illustrates the data
relationships and structure of the plant disease detection system.

4.7 SEQUENCE DIAGRAM:

In this figure, the image is selected by browsing and is then sent to the server, where
processing begins: features are extracted and the image is segmented. The result is
predicted and shown on the main screen in the form of the disease name with its cure.

4.7.1 Sequence diagram

4.7.1 DESCRIPTION:

Select Image from Device

The user selects an image containing relevant data, possibly related to a plant disease.

Browse Image

The user uploads the selected image to the system, triggering the process of analysis and
prediction.

Feature Extraction

As the image is received, the Prediction lifeline takes over, performing feature extraction on
the uploaded image to identify key characteristics.

Segmentation:

The system then proceeds with segmentation, isolating specific regions of interest within the
image for more precise analysis.

Classify Disease:

The prediction engine classifies the features, determining the potential disease or condition. The
identified disease is then presented to the user for review.

Give Details:

Simultaneously, the system provides the details in the report.

Show Result (User Life Line):

The analysis results, including the identified disease and suggested cures, are presented in a
report to the user for their understanding and consideration.

Generate Report (Result Life Line):

The user generates a comprehensive report.

Save Report:

The user can save the report.

4.8 OBJECT DIAGRAM:
An object diagram is a graph of instances, including objects and data values. An object
diagram is an instance of a class diagram.

4.8.1 Object diagram

4.8.1 Description:

An object diagram represents classes' instances and their relationships at a specific point in time.
In the context of our plant disease detection system, the provided object diagram portrays the state
that the system is in when a user uploads an image and the system is processing it. Within this
scenario, several key objects are identified as follows:

1. User: This instance represents the user who uploaded the image.

2. Image: This represents the uploaded image and its details.

3. Image Processor: This represents the instance responsible for processing the uploaded image.

4. Disease Information: This represents the details regarding the detected disease, known as a
crucial component of the processed information.

5. Report Generator: This represents the instance responsible for generating a comprehensive
disease report based on the processed image and associated data.

4.9 Component Diagram:

4.9.1 Component diagram

4.9.1 Description:

User: Represents the end-user interacting with the system.

Device: Acts as the medium through which the user interacts with the system, usually a personal
computer or a mobile device.

Select Image: Functionality that lets the user choose an image for analysis.

Browse Image: Allows the user to navigate through images stored within the system.

View Application: The interface that the user interacts with.

View Result: Displays the analysis results to the user.

Plant Disease Detection: The central system responsible for processing and analyzing the images.

Segmentation: Similar to the first diagram, it processes the image by segmenting it to facilitate
detailed analysis.

Feature Extraction: Extracts crucial features from the image necessary for disease detection.

Prediction: Analyzes the segmented and feature-extracted data to predict the disease.

Web server: Facilitates the exchange of data between the client-side and the server-side.

4.10 Deployment Diagram:

4.10.1 Deployment diagram

4.10.1 Description:

Device: Serves as the central interface for input and output operations within the application
server.

View Application: A functionality that allows the user to interact with the application’s interface.

Browse Image: Allows the user to browse and select images within the application.

Feature Extraction: A process where key features from the selected images are extracted for
analysis.

Segmentation: This process involves dividing the image into parts or segments to simplify or
change the representation of the image into something more meaningful and easier to analyze.

Prediction: Based on the extracted features and image segmentation, the application predicts the
disease.

Cure: Provides suggestions or measures based on the prediction results.

Select Image: Allows users to select specific images for processing.

View Result: Enables the user to view the outcomes of the analysis.

Webserver: Handles HTTP requests from the application server, facilitating communication
between the server and client-side.

4.11 Operational Diagram

4.11.1 Operational diagram

4.11.1 Description

This operational diagram of the plant disease detection software shows the user first
connecting through a desktop. The desktop sends an HTTP request over the internet to the
web server, which retrieves the HTML web resource; the resource is returned to the web
server and then back to the desktop in the response. The desktop then issues further HTTP
requests, for example to search for an image: the web server forwards the image access
request to the file server, which returns the image for display on the desktop.[53]

ALGORITHM ANALYSIS AND COMPLEXITY

5.1 INTRODUCTION

This chapter discusses the algorithms used in our plant disease detection application, an
image processing project, along with their complexity. We also discuss the other types of
algorithms commonly used in image processing projects.

5.2 CONVOLUTIONAL NEURAL NETWORK

Image recognition is the task of taking an input image and outputting the class that best
describes it. The Convolutional Neural Network (CNN) is the most common approach to
visual perception problems and tends to outperform all other techniques.[13] A CNN is
characterized by an architecture composed of alternating convolutional and pooling layers,
optionally followed by fully connected layers. This architecture has proven successful at
extracting and combining features from an input image. Here, the input layers are fully
connected with the hidden layers.
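The alternating convolution and pooling described above can be illustrated with a hand-rolled NumPy sketch. Real CNNs learn the kernel weights during training; the fixed edge-detecting kernel here is purely for demonstration.

```python
import numpy as np

def conv2d(image, kernel):
    # Valid-mode 2-D convolution (cross-correlation, as in CNN libraries).
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    # Max pooling with stride equal to the pool size.
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

image = np.arange(16, dtype=float).reshape(4, 4)
edge_kernel = np.array([[1.0, -1.0], [1.0, -1.0]])  # horizontal-gradient detector
feature_map = conv2d(image, edge_kernel)  # shape (3, 3)
pooled = max_pool(feature_map)            # shape (1, 1)
```

In the actual application this stack would be built with Keras layers (Conv2D, MaxPooling2D) rather than written by hand.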

5.3 ALGORITHM:

Algorithm for plant disease detection application

1) Initialize the weights

2) Multiply weights by input and sum them up

3) Compare the results against the threshold to compute the output (1 or 0)

4) Update the weights

5) Import the required libraries.

6) Read from file

7) Divide the data in training data, testing data, validating data

8) Train our model

9) Select the features of plants used for prediction

10) Try:
Convert the plant image to grayscale with cv2.cvtColor
Run the model prediction for plant disease and store it in result
If the confidence percentage > 50, return the predicted disease name with its prevention
Else set the predicted name to "not recognized"

11) Return to the main plant test screen
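Steps 1-4 of the algorithm (initialize the weights, compute the weighted sum, threshold, update the weights) amount to a perceptron-style update rule, sketched below. The 0.5 threshold and 0.1 learning rate are illustrative choices, not values taken from the project.

```python
import numpy as np

def perceptron_step(weights, x, target, lr=0.1):
    # Steps 2-3: weighted sum of inputs, compared against a threshold.
    output = 1 if np.dot(weights, x) >= 0.5 else 0
    # Step 4: nudge the weights toward the target when the output is wrong.
    new_weights = weights + lr * (target - output) * x
    return new_weights, output

weights = np.zeros(2)            # Step 1: initialize the weights
x = np.array([1.0, 1.0])         # one training example
weights, out = perceptron_step(weights, x, target=1)
```

In the real application this update happens inside the CNN's training loop (backpropagation through Keras), but the principle of comparing an output to a target and adjusting weights is the same.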

5.4 PSEUDO CODE:

Step 1

Split the data into training, testing, and validation sets.

Step 2

Apply convolutional layers with a given kernel size to map the features.

Step 3

Downsample the data, then train, test, and validate it.

Step 4

Compute the accuracy with the updated weights.

Step 5

Return the classification result.

5.5 COMPLEXITIES OF THE ALGORITHM WORST CASE

In the worst case, every data point in each batch during each epoch passes through multiple
layers, each performing convolution, activation, pooling, and full connectivity. The
worst-case time complexity is therefore the product of these factors.[54] If the network has
L layers, B batches, and E epochs, the worst-case time complexity is:

O(L * E * B * W * H * D * F * F * C)

Where,

W= Width of the input

H =Height of the input

D =Depth of the input

F =Size of the filter

C =Number of filters

5.6 COMPLEXITIES OF THE ALGORITHM BEST CASE

The complexity for best case is:

O(W * H * D * F * F * C)

Where,

W = Width of the input

H = Height of the input

D = Depth (number of channels) of the input

F = Size of the filter

C = Number of filters

5.7 COMPARISON OF ALGORITHMS:

The comparison between CNN, RNN, and GAN is as follows:

CNN (Convolutional Neural Network): designed for spatial data such as images; works on a
fixed input size and generates a fixed-size output; used in image and video processing.

RNN (Recurrent Neural Network): designed for temporal data; works on arbitrary
input/output lengths; used in text and speech analysis.

GAN (Generative Adversarial Network): a form of unsupervised learning; generates a
complex output from a simple input; used in text-to-image and image-to-image translation.

5.8 ALGORITHMS:

The project can involve various algorithms, including:

5.8.1 Convolutional Neural Networks (CNNs):

CNNs are commonly used for image classification tasks. They can learn to extract relevant
features from plant images and classify them into different disease categories.

5.8.2 Feature Extraction Algorithms:

These algorithms can be employed to extract meaningful features from images, such as color,
texture, shape, or leaf venation patterns. These features can then be used for plant identification or
disease detection.

5.8.3 Image Segmentation:

Image segmentation algorithms can be used to separate plant regions from the background or
isolate specific parts of the plant for more accurate analysis.

5.8.4 Data Preprocessing:

Data preprocessing techniques, such as resizing images, normalizing pixel values, and
augmenting the dataset, can be applied to improve the performance and robustness of the models.

5.9 CONCLUSION:

In conclusion, the plant disease detection application employs Convolutional Neural Networks
(CNNs) as a primary algorithm for image classification, allowing the model to effectively extract
features from plant images and categorize them based on disease. The algorithmic approach
involves initializing weights, training the model, and utilizing a step-by-step process for image
prediction and disease identification. The pseudo code outlines the key steps, including data
division, layer mapping, and result classification. A comparison between CNN, Recurrent Neural
Network (RNN), and Generative Adversarial Network (GAN) emphasizes the specialization of
each algorithm in spatial data, temporal data, and unsupervised learning, respectively. The project
also incorporates feature extraction algorithms, image segmentation, and data preprocessing
techniques to enhance the accuracy and robustness of disease detection. Overall, the combination
of these algorithms provides a comprehensive and effective solution for plant disease
identification in image processing projects. [14]

CHAPTER VI

IMPLEMENTATION

6.1 Front End:

Testing.jsx:

import React, { useState } from 'react';
import axios from 'axios';

const Testing = () => {
  const [selectedFile, setSelectedFile] = useState(null);
  const [responseData, setResponseData] = useState(null);

  const handleFileUpload = async () => {
    if (!selectedFile) {
      console.error('Please select a file.');
      return;
    }
    const formData = new FormData();
    formData.append('image', selectedFile);
    try {
      const response = await axios.post(
        'http://127.0.0.1:8000/step-by-step-processing/api/process-image/',
        formData
      );
      console.log(response.data);
      setResponseData(response.data);
    } catch (error) {
      console.error('Error uploading image:', error);
    }
  };

  const handleInputChange = (event) => {
    const file = event.target.files[0];
    if (file && file.type === 'image/jpeg') {
      setSelectedFile(file);
    } else {
      console.error('Please select a .jpg file.');
    }
  };

  return (
    <div>
      <input type="file" accept=".jpg" onChange={handleInputChange} />
      <button onClick={handleFileUpload}>Process Model</button>
      {responseData && (
        <div>
          <h2>Results</h2>
          <div>
            <h4>Initial Image:</h4>
            <img src={`data:image/jpeg;base64,${responseData.initial_image_uri}`} alt="Initial" />
            <p>{responseData.initial_message}</p>
          </div>
          <div>
            <h4>Masked Image:</h4>
            <img src={`data:image/jpeg;base64,${responseData.masked_image_uri}`} alt="Masked" />
          </div>
          <div>
            <h4>Segmented Image:</h4>
            <img src={`data:image/jpeg;base64,${responseData.segmented_image_uri}`} alt="Segmented" />
          </div>
          <div>
            <h4>Damage Analysis Image:</h4>
            <img src={`data:image/jpeg;base64,${responseData.damage_analysis_uri}`} alt="Damage Analysis" />
            <p>Total Discolored Area: {responseData.total_percentage}%</p>
            <p>Damage Classification: {responseData.damage_classification}</p>
          </div>
          <div>
            <h4>Recommendation:</h4>
            <p>{responseData.recommendation}</p>
          </div>
        </div>
      )}
    </div>
  );
};

export default Testing;

6.2 Backend:

Views.py:

from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
from .masking import apply_mask
from .segmentation import process_segmentation
from .analysis import process_damage_analysis
from .utils import get_cure_recommendation
from .model_processing import process_image as process_initial_image

@csrf_exempt
def process_image_api(request):
    if request.method == 'POST' and request.FILES.get('image'):
        image_file = request.FILES['image']

        # Run the pipeline step by step, rewinding the stream between stages
        initial_results, image_stream = process_initial_image(image_file)
        image_stream.seek(0)
        masked_image_uri = apply_mask(image_stream)
        image_stream.seek(0)
        segmented_image_uri = process_segmentation(image_stream)
        image_stream.seek(0)
        damage_analysis_uri, total_percentage, damage_classification = process_damage_analysis(image_stream)

        recommendation = get_cure_recommendation(damage_classification, plant_id=1)

        response_data = {
            'initial_message': initial_results['result_text'],
            'initial_image_uri': initial_results['image_uri'],
            'masked_image_uri': masked_image_uri,
            'segmented_image_uri': segmented_image_uri,
            'damage_analysis_uri': damage_analysis_uri,
            'total_percentage': total_percentage,
            'damage_classification': damage_classification,
            'recommendation': recommendation,
        }
        return JsonResponse(response_data)
    return JsonResponse({'error': 'No image provided'}, status=400)

Utils.py:

# utils.py in your Django app directory (e.g., step_by_step_processing/utils.py)

from .data import plant_disease_data

def get_cure_recommendation(damage_classification, plant_id):
    plant_info = next((item for item in plant_disease_data if item["id"] == plant_id), None)
    if not plant_info:
        return "No data available for this plant."
    if damage_classification == "lightly damaged":
        intensity_level = 'low'
    elif damage_classification == "medium damaged":
        intensity_level = 'medium'
    else:
        intensity_level = 'high'
    return plant_info['Intensity'].get(intensity_level, "No specific cure available.")

Data.py:

# data.py in your Django app directory (e.g., step_by_step_processing/data.py)

plant_disease_data = [
    {
        "Plant Name": "Tomato",
        "Disease": "Septoria Leaf Spot",
        "Cause": "fungus Septoria lycopersici",
        "Cure": "Removal and destruction of affected plant parts...",
        "Intensity": {
            "low": "Use chlorothalonil and mancozeb Fungicides...",
            "medium": "use Fluxapyroxad 250 g/l + Pyraclostrobin 250 g/l SC...",
            "high": "Remove or burn infected leaves",
        },
        "id": 1,
    },
    {
        "Plant Name": "Tomato",
        "Disease": "Yellow Leaf Curl Virus",
        "Cause": "viruses in the Geminivirus family of plant viruses",
        "Cure": "Planting TYLCV-resistant varieties when appropriate...",
        "Intensity": {
            "low": "Spraying fungicides such as Bravo...",
            "medium": "Reduce viral inoculum by destroying crop residues...",
            "high": "Cut off the plant below the bag and allow bag with plant...",
        },
        "id": 2,
    },
    {
        "Plant Name": "Grape",
        "Disease": "Black Measles",
        "Cause": "phaeomoniella spp infestans",
        "Cure": "Remove the infected berries, leaves and trunk...",
        "Intensity": {
            "low": "Use of essential oils such as and Mancozeb fungicides...",
            "medium": "Protect the prune wounds to minimize fungal infection...",
            "high": "Remove the infected berries, leaves and trunk...",
        },
        "id": 3,
    },
    {
        "Plant Name": "Potato",
        "Disease": "Early Blight",
        "Cause": "Alternaria solani",
        "Cure": "Use foliar fungicides. Apply Septum (a combination of phenol molecules, saponins, flavonoids and silicic acid obtained from Equisetum arvense extract)...",
        "Intensity": {
            "low": "Use foliar fungicides. Apply Septum...",
            "medium": "Apply crop rotation and eradicate weed hosts...",
            "high": "Burn or bag infected plant parts",
        },
        "id": 4,
    },
    {
        "Plant Name": "Potato",
        "Disease": "Late Blight",
        "Cause": "Phytophthora infestans",
        "Cure": "Protective spraying with mancozeb or zineb 0.2 % should be done to prevent infection of tubers...",
        "Intensity": {
            "low": "Destruction of the foliage few days before harvest is beneficial...",
            "medium": "The resistant varieties recommended are Kufri Naveen, Kufri Jeevan...",
            "high": "Burn or bag infected plant parts",
        },
        "id": 5,
    },
]
Analysis.py:

import numpy as np
import cv2
from io import BytesIO
import base64
import matplotlib.pyplot as plt

def calculate_percentage_of_color(segmented_image, lower_bound, upper_bound):
    hsv_image = cv2.cvtColor(segmented_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv_image, lower_bound, upper_bound)
    mask_area = np.sum(mask > 0)
    leaf_area = np.sum(cv2.cvtColor(segmented_image, cv2.COLOR_BGR2GRAY) > 0)
    percentage = (mask_area / leaf_area) * 100 if leaf_area > 0 else 0
    return mask_area, percentage, mask

def classify_damage(percentage):
    if percentage < 30:
        return "lightly damaged"
    elif percentage < 50:
        return "medium damaged"
    else:
        return "highly damaged"

def draw_contours_around_damaged_area(segmented_image, mask):
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cv2.drawContours(segmented_image, contours, -1, (0, 255, 0), 2)
    return segmented_image

def process_damage_analysis(segmented_image_stream):
    image_array = np.asarray(bytearray(segmented_image_stream.read()), dtype=np.uint8)
    segmented_image = cv2.imdecode(image_array, cv2.IMREAD_COLOR)

    # HSV ranges for discolored (diseased) tissue
    lower_bound_dark_brown = np.array([10, 100, 20])
    upper_bound_dark_brown = np.array([30, 255, 150])
    lower_bound_gray = np.array([0, 0, 50])
    upper_bound_gray = np.array([180, 50, 150])

    # Analysis
    _, dark_brown_percentage, mask_dark_brown = calculate_percentage_of_color(
        segmented_image, lower_bound_dark_brown, upper_bound_dark_brown)
    _, gray_percentage, mask_gray = calculate_percentage_of_color(
        segmented_image, lower_bound_gray, upper_bound_gray)
    combined_mask = cv2.bitwise_or(mask_dark_brown, mask_gray)

    segmented_with_contours = draw_contours_around_damaged_area(segmented_image.copy(), combined_mask)
    segmented_with_contours_rgb = cv2.cvtColor(segmented_with_contours, cv2.COLOR_BGR2RGB)

    plt.figure(figsize=(5, 5))
    plt.imshow(segmented_with_contours_rgb)
    plt.title("Segmented Image with Damage Contours")
    plt.axis('off')
    buf = BytesIO()
    plt.savefig(buf, format='png')
    buf.seek(0)
    segmented_with_contours_uri = base64.b64encode(buf.read()).decode('utf-8')
    plt.close()

    total_percentage = dark_brown_percentage + gray_percentage
    damage_classification = classify_damage(total_percentage)
    return segmented_with_contours_uri, total_percentage, damage_classification

Segmentation.py:

import cv2
import numpy as np
from io import BytesIO
import base64
import matplotlib.pyplot as plt

def convert_to_hsv(image):
    return cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

def create_mask_for_diseased_area(hsv_image, lower_bound_dark_brown, upper_bound_dark_brown,
                                  lower_bound_gray, upper_bound_gray):
    mask_dark_brown = cv2.inRange(hsv_image, lower_bound_dark_brown, upper_bound_dark_brown)
    mask_gray = cv2.inRange(hsv_image, lower_bound_gray, upper_bound_gray)
    combined_mask = cv2.bitwise_or(mask_dark_brown, mask_gray)
    return combined_mask

def segment_diseased_area(original_image, mask):
    return cv2.bitwise_and(original_image, original_image, mask=mask)

def process_segmentation(image_stream):
    image_array = np.asarray(bytearray(image_stream.read()), dtype=np.uint8)
    original_image = cv2.imdecode(image_array, cv2.IMREAD_COLOR)
    hsv_image = convert_to_hsv(original_image)

    lower_bound_dark_brown = np.array([10, 100, 20])
    upper_bound_dark_brown = np.array([30, 255, 150])
    lower_bound_gray = np.array([0, 0, 50])
    upper_bound_gray = np.array([180, 50, 150])

    mask = create_mask_for_diseased_area(hsv_image, lower_bound_dark_brown,
                                         upper_bound_dark_brown, lower_bound_gray,
                                         upper_bound_gray)
    segmented_image = segment_diseased_area(original_image, mask)

    # Convert to RGB for display
    segmented_image_rgb = cv2.cvtColor(segmented_image, cv2.COLOR_BGR2RGB)
    plt.figure(figsize=(5, 5))
    plt.imshow(segmented_image_rgb)
    plt.title("Segmented Image")
    plt.axis('off')
    buf = BytesIO()
    plt.savefig(buf, format='png')
    buf.seek(0)
    segmented_image_uri = base64.b64encode(buf.read()).decode('utf-8')
    plt.close()
    return segmented_image_uri

Models.py:

from django.db import models

# Create your models here.

Model_processing.py:

from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.models import load_model
import numpy as np
from PIL import Image as PilImage
from io import BytesIO
import base64
import os

def process_image(image_file):
    model = load_model(os.path.join('step_by_step_processing', 'plant_disease_model_v6.h5'))

    image_stream = BytesIO(image_file.read())
    image_stream.seek(0)
    image = PilImage.open(image_stream)
    img_resized = image.resize((224, 224))
    img_array = img_to_array(img_resized)
    img_array = np.expand_dims(img_array, axis=0) / 255.0

    prediction = model.predict(img_array)
    confidence = np.max(prediction)
    class_names = ['Apple cedar Rust', 'Apple healthy', 'Apple scab',
                   'Grape black Measles', 'Grape healthy', 'Grape Isariopsis Leaf Spot',
                   'Potato Early Blight', 'Potato Healthy', 'Potato Late Blight',
                   'Tomato healthy', 'Tomato Yellow Leaf Curl Virus', 'Tomato septoria leaf spot']
    predicted_class = class_names[np.argmax(prediction)]
    result_text = f"Prediction: {predicted_class} with confidence {confidence:.2f}"

    image_stream.seek(0)
    return {'result_text': result_text,
            'image_uri': base64.b64encode(image_stream.getvalue()).decode('utf-8')}, image_stream

Masking.py:

import cv2
import numpy as np
import matplotlib.pyplot as plt
from io import BytesIO
import base64

def apply_mask(image_stream):
    image_array = np.asarray(bytearray(image_stream.read()), dtype=np.uint8)
    image = cv2.imdecode(image_array, cv2.IMREAD_COLOR)
    image_resized = cv2.resize(image, (256, 256))
    hsv = cv2.cvtColor(image_resized, cv2.COLOR_BGR2HSV)

    # Fixed HSV bounds for healthy green foliage
    fixed_lower_s = 41
    fixed_upper_h = 83
    lower_green = np.array([0, fixed_lower_s, 0])
    upper_green = np.array([fixed_upper_h, 255, 255])

    mask = cv2.inRange(hsv, lower_green, upper_green)
    result = cv2.bitwise_and(image_resized, image_resized, mask=mask)
    result_rgb = cv2.cvtColor(result, cv2.COLOR_BGR2RGB)

    plt.figure(figsize=(5, 5))
    plt.imshow(result_rgb)
    plt.title('Masked Image')
    plt.axis('off')
    buf = BytesIO()
    plt.savefig(buf, format='png')
    buf.seek(0)
    image_uri = base64.b64encode(buf.read()).decode('utf-8')
    plt.close()
    return image_uri

Settings.py:

"""

Django settings for fyprumaisa project.

Generated by 'django-admin startproject' using Django 5.0.4.

For more information on this file, see

https://docs.djangoproject.com/en/5.0/topics/settings/

For the full list of settings and their values, see

https://docs.djangoproject.com/en/5.0/ref/settings/

"""

from pathlib import Path

# Build paths inside the project like this: BASE_DIR / 'subdir'.

BASE_DIR = Path(__file__).resolve().parent.parent

# Quick-start development settings - unsuitable for production

# See https://docs.djangoproject.com/en/5.0/howto/deployment/checklist/

# SECURITY WARNING: keep the secret key used in production secret!

SECRET_KEY = 'django-insecure-esuwla17au+*yh_&3w&bi%ie=wr=26r4-s241tt-hoj$pr-ows'

# SECURITY WARNING: don't run with debug turned on in production!

DEBUG = True

ALLOWED_HOSTS = []

# Application definition

INSTALLED_APPS = [

Page 67 of 101
'django.contrib.admin',

'django.contrib.auth',

'django.contrib.contenttypes',

'django.contrib.sessions',

'django.contrib.messages',

'django.contrib.staticfiles',

'home',

'ml_app',

'ml_tests',

'step_by_step_processing',

'rest_framework',

'corsheaders',

MIDDLEWARE = [

'django.middleware.security.SecurityMiddleware',

'django.contrib.sessions.middleware.SessionMiddleware',

'django.middleware.common.CommonMiddleware',

'django.middleware.csrf.CsrfViewMiddleware',

'django.contrib.auth.middleware.AuthenticationMiddleware',

'django.contrib.messages.middleware.MessageMiddleware',

'django.middleware.clickjacking.XFrameOptionsMiddleware',

Page 68 of 101
'corsheaders.middleware.CorsMiddleware',

'django.middleware.common.CommonMiddleware',

ROOT_URLCONF = 'fyprumaisa.urls'

CORS_ALLOW_ALL_ORIGINS = True

TEMPLATES = [

'BACKEND': 'django.template.backends.django.DjangoTemplates',

'DIRS': [],

'APP_DIRS': True,

'OPTIONS': {

'context_processors': [

'django.template.context_processors.debug',

'django.template.context_processors.request',

'django.contrib.auth.context_processors.auth',

'django.contrib.messages.context_processors.messages',

],

},

},

WSGI_APPLICATION = 'fyprumaisa.wsgi.application'

Page 69 of 101
# Database

# https://docs.djangoproject.com/en/5.0/ref/settings/#databases

DATABASES = {

'default': {

'ENGINE': 'django.db.backends.sqlite3',

'NAME': BASE_DIR / 'db.sqlite3',

# Password validation

# https://docs.djangoproject.com/en/5.0/ref/settings/#auth-password-validators

AUTH_PASSWORD_VALIDATORS = [

'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',

},

'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',

},

'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',

},

Page 70 of 101
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',

},

CACHES = {

'default': {

'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',

'LOCATION': 'unique-snowflake',

# Internationalization

# https://docs.djangoproject.com/en/5.0/topics/i18n/

LANGUAGE_CODE = 'en-us'

TIME_ZONE = 'UTC'

USE_I18N = True

USE_TZ = True

# Static files (CSS, JavaScript, Images)

# https://docs.djangoproject.com/en/5.0/howto/static-files/

STATIC_URL = 'static/'

# Default primary key field type

# https://docs.djangoproject.com/en/5.0/ref/settings/#default-auto-field

DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'

CHAPTER VII

Testing
7.1 INTRODUCTION:

Testing is a crucial stage in the development of plant disease detection software, since it
guarantees accurate, effective, and dependable system operation in a range of scenarios. It
consists of a number of methodical tasks intended to assess the functionality, correctness,
usability, and resilience of the software[15]. Through thorough testing, developers can find and
fix bugs, confirm that the program satisfies its requirements, and verify that it works as
intended in practical situations.[16]

7.2 OBJECTIVES OF TESTING:

 To guarantee that the program accurately and precisely detects and diagnoses plant diseases.
 To assess how well the machine learning models, in particular the Convolutional Neural
Networks (CNNs), perform in identifying patterns associated with disease in pictures.
 To evaluate the software's processing speed and response time to make sure it can effectively
perform real-time analysis.
 To evaluate the system's scalability in handling big datasets and high-resolution photos.
 To confirm that farmers and agronomists can easily navigate and utilize the program due to
its intuitive and user-friendly user interface.[55]
 To guarantee that the program offers concise and actionable feedback to users.
 To guarantee that the program can function dependably in a variety of settings, such as
altered lighting or image quality.
 To evaluate the software's resistance to foreseeable problems like corrupted photos or
inadequate data.[17]

7.3 TYPES OF TESTING:

1. Functional Testing
2. Non-Functional Testing

7.4 FUNCTIONAL TESTING:

Software testing that concentrates on confirming that the program operates in accordance with
the given criteria is known as functional testing. It guarantees that every feature of the software
program performs in accordance with the requirements. User interface, APIs, databases, security,
client/server apps, and software functionality are all checked during this kind of testing.[18]
Validating the software system against the functional requirements and specifications is the aim.
7.4.1 TYPES AND TECHNIQUES OF FUNCTIONAL TESTING:

1. Unit Testing: Unit testing is a software testing methodology that involves testing individual
software units or components separately. These units usually correspond to the smallest
software components that can be tested, like modules, functions, or methods.
2. Acceptance Testing: User acceptability testing (UAT), referred to as acceptance testing, is
an essential stage of software development when the program is assessed to make sure it
satisfies end users' or stakeholders' needs and expectations. In contrast to unit testing, which
concentrates on testing specific code units, acceptance testing assesses whether the system as
a whole complies with the requirements and is ready for deployment.[19]
3. Integration Testing: Software modules or components are merged and evaluated as a group
to make sure they function as intended as part of the integration testing technique. This kind
of testing focuses on confirming how various software components interact with one another
and looking for any flaws or problems that may occur during integration.
4. System Testing: System testing is a thorough stage in the software development life cycle
when the integrated software system as a whole is assessed to make sure it satisfies
requirements and operates as intended in its intended setting.[20] Rather than concentrating
on specific modules or units, this kind of testing is carried out on the system as a whole,
including all of its parts and subsystems.
5. Black Box Testing: Testing without knowledge of the internal workings of the application.
Focuses on input and output.
6. White Box Testing: Testing with knowledge of the internal workings of the application.
Focuses on code structure.
7. Regression Testing: Software testing techniques like regression testing are used to
determine whether recent code changes have had an impact on the program's current
functionality. Throughout the software development lifecycle, regression testing plays a
crucial role in preserving the software's stability and integrity.[56]
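As a concrete sketch of integration testing in this project, the snippet below wires together simplified re-implementations of two backend units (the damage classifier from analysis.py and the intensity lookup from utils.py) and asserts on their combined behavior; the plant record and cure strings are placeholders, not the project's real data:

```python
def classify_damage(percentage):
    """Simplified copy of the classifier in analysis.py."""
    if percentage < 30:
        return "lightly damaged"
    elif percentage < 50:
        return "medium damaged"
    return "highly damaged"

LEVEL = {"lightly damaged": "low", "medium damaged": "medium", "highly damaged": "high"}

def get_cure_recommendation(percentage, plant_info):
    """Integrates the two units: classify the damage, then look up the cure
    (mirrors the flow of utils.get_cure_recommendation)."""
    return plant_info["Intensity"].get(LEVEL[classify_damage(percentage)],
                                       "No specific cure available.")

# Placeholder plant record shaped like the entries in data.py
plant = {"Intensity": {"low": "cure A", "medium": "cure B", "high": "cure C"}}

print(get_cure_recommendation(10, plant))   # lightly damaged -> cure A
print(get_cure_recommendation(40, plant))   # medium damaged  -> cure B
print(get_cure_recommendation(80, plant))   # highly damaged  -> cure C
```

The point of such a test is that each unit may pass in isolation, yet the pair can still disagree on the damage labels they exchange — exactly the class of defect integration testing is meant to catch.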

7.5 NON-FUNCTIONAL TESTING:

Aspects of the software, such as performance, scalability, and usability, that might not be
connected to a particular function or user action are covered by non-functional testing. Rather
than concentrating on particular actions, this kind of testing focuses on how the system functions.
[21]

7.5.1 TYPES AND TECHNIQUES OF NON-FUNCTIONAL TESTING:

i. Performance Testing
ii. Load Testing
iii. Stress Testing
iv. Usability Testing
v. Security Testing
vi. Scalability Testing
vii. Benchmark Testing
viii. Reliability Testing

7.6 BLACK BOX TESTING:

Testing without knowledge of the internal workings of the application. Focuses on input and
output.[22]

a) Appearance Of The Logo:

Test Case Title: Appearance of the Logo
Priority: High
Preconditions:
1. The application is launched and accessible.
2. The user has appropriate permissions.
Test Steps:
1. Navigate to the homepage of the application.
2. Locate the logo at the top-left corner.
Expected Result:
1. The logo should be prominently displayed and easily recognizable.
2. The logo should accurately represent the theme of the application.
3. The logo should be clear and not distorted or pixelated.
4. The logo should be aligned properly and positioned in the designated area.
5. The logo should have appropriate dimensions, maintaining a balanced appearance.
6. The logo should have good color contrast with the background.
7. The logo should not contain any spelling mistakes or visual inconsistencies.
8. Clicking on the logo should navigate the user to the homepage or perform a relevant action.
Actual Result: Logo is clearly visible and is aligned properly.
Pass/Fail Criteria: If the result matches the expected result, the test is marked "PASS"; otherwise "FAIL".
Status: Pass
TABLE 7.1 Appearance Of The Logo

7.7 WHITE BOX TESTING:

Testing with knowledge of the internal workings of the application. Focuses on code structure.

b) Diagnose Button:

Test Case Title: Diagnose Button Functionality Verification
Priority: High
Preconditions:
1. The application is launched and accessible.
2. The user has uploaded an image of a plant with suspected disease.
Test Steps:
1. Navigate to the page where the image is uploaded for diagnosis.
2. Ensure that the uploaded image is visible and correctly displayed.
3. Locate the "Diagnose" button on the interface.
4. Click on the "Diagnose" button.
Expected Result:
1. The uploaded image should be clearly visible and identifiable.
2. The "Diagnose" button should be prominently displayed and clickable.
3. After clicking on the "Diagnose" button, the application should initiate the disease detection process.
4. The application should analyze the uploaded image for plant diseases.
5. The application should provide a diagnosis report indicating the detected disease(s) along with relevant information (e.g., disease name, severity, recommended actions).
6. The diagnosis report should be displayed clearly and be easy to understand.
Actual Result:
1. Diagnose button is clickable and is displayed prominently.
2. The button initiates the disease detection process.
Pass/Fail Criteria:
- Pass: The diagnosis report is generated accurately and presented clearly.
- Fail: The diagnosis report is incorrect, incomplete, or not displayed as expected.
Status: Pass
TABLE 7.2 Diagnose Button

c) Upload Button

Test Case Title: Upload Button Functionality Verification
Priority: High
Preconditions:
1. The application is launched and accessible.
Test Steps:
1. Navigate to the page where the image can be uploaded for diagnosis.
2. Ensure that the option to upload an image is available and clearly indicated.
3. Locate the "Upload" button on the interface.
4. Click on the "Upload" button.
Expected Result:
1. The option to upload an image should be clearly visible and easily accessible.
2. The "Upload" button should be prominently displayed and clickable.
3. After clicking on the "Upload" button, a file dialog should open allowing the user to select an image file from their device.
4. The selected image file should be uploaded successfully to the application.
5. Upon successful upload, the uploaded image should be displayed on the interface for further processing.
Actual Result: Upload button is clickable and is displayed prominently. The button initiates the plant identification process.
Pass/Fail Criteria:
- Pass: The image is uploaded and displayed correctly for further processing.
- Fail: The upload fails, or the uploaded image is not displayed as expected.
Status: Pass
TABLE 7.3 Upload Button

d) Correct Data Retrieval

Test Case Title: Correct Data Retrieval from Stored Data
Priority: High
Preconditions:
1. The application is deployed and accessible.
2. The database is populated with relevant test data.
Test Steps:
1. Navigate to the section requiring data retrieval from stored data.
2. Initiate the action to retrieve specific data (e.g., accessing user profile, querying disease information).
3. Verify that the system sends a request to the database for data retrieval.
4. Check that the retrieved data matches the expected data stored in the database.
Expected Result:
1. The system successfully sends a request to the database without errors.
2. The retrieved data matches the expected data stored in the database.
3. The retrieved data is accurate, complete, and consistent with the data stored in the database.
Actual Result: The system successfully retrieves correct data from the database without errors.
Pass/Fail Criteria:
- Pass: The system successfully retrieves correct data from the database without errors.
- Fail: The system encounters errors during data retrieval, or the retrieved data does not match the expected data stored in the database.
Status: Pass
TABLE 7.4 Correct Data Retrieval

e) Correct Disease and Plant Identification

Test Case Title: Correct Disease and Plant Identification
Priority: High
Preconditions:
1. The application is deployed and accessible.
2. The database is populated with relevant test data.
Test Steps:
1. Upload an image of a plant with suspected disease to the application.
2. Initiate the disease detection process.
3. Verify that the software accurately identifies the plant species.
4. Verify that the software correctly identifies the disease affecting the plant.
Expected Result:
1. The software successfully processes the uploaded image without errors.
2. The identified plant species matches the actual plant species depicted in the image.
3. The identified disease matches the actual disease affecting the plant.
Actual Result: The system successfully identifies the plant and detects the disease.
Pass/Fail Criteria:
- Pass: The software correctly identifies both the plant species and disease without errors.
- Fail: The software fails to accurately identify the plant species or disease, or encounters errors during identification.
Status: Pass
TABLE 7.5 Correct Disease and Plant Identification

7.8 FUNCTIONAL TESTING:

Test Case 1
Scenario: User clicks on the upload button.
Pre-Condition: User should have opened the application URL.
Test Steps: Click on the upload button.
Expected Result: GUI works effectively.
Actual Result: Image uploaded successfully.
Status: Pass

Test Case 2
Scenario: User clicks on the browse image button.
Pre-Condition: User should be on the homepage.
Test Steps: Click on the browse image button.
Expected Result: Image will be browsed.
Actual Result: Image uploaded successfully.
Status: Pass / invalid image rejected

Test Case 3
Scenario: User clicks on the diagnose button.
Pre-Condition: User should be on the relevant page.
Test Steps: Click on the diagnose button.
Expected Result: Disease will be detected.
Actual Result: Disease detected successfully.
Status: Pass

Test Case 4
Scenario: User clicks on view result.
Pre-Condition: User should be on the relevant page.
Test Steps: Click on the view result button.
Expected Result: Cure instructions will be provided.
Actual Result: User is provided with cure instructions.
Status: Pass

TABLE 7.6 Functional Testing

7.8.1 CONCLUSION OF FUNCTIONAL TESTING

In functional testing we tested all the core functionality of our web application. Different test
cases were executed to assess how well the web application works. The results confirm that key
features operate smoothly, providing a reliable and user-friendly experience. Users can
effectively upload and browse images, diagnose plant diseases, and view results with cure
instructions.[24] Each test case, including image uploads, disease detection, and result viewing,
passed successfully, demonstrating that the software's GUI is responsive and the core
functionalities are robust. This ensures users can easily navigate the software, upload images,
receive accurate disease diagnoses, and obtain valuable management advice, thereby supporting
effective plant care and sustainable agricultural practices.[25]

7.9 ERROR HANDLING TESTING

In error handling testing, tests are run to identify areas where potential errors can be hidden.

ERROR HANDLING TESTING

Interface: Image
Bug Description: The user browses and uploads an image as input.
Fix / Not Fix: Fix

Interface: Prediction
Bug Description: The user uploads a plant image to get a detection from the Plant Disease Detection Software.
Fix / Not Fix: Fix

Interface: Result
Bug Description: The software identifies the plant and gives cure instructions as installed in the software.
Fix / Not Fix: Fix
TABLE 7.7 Error Handling Testing

7.9.1 CONCLUSION OF ERROR HANDLING TESTING

In error handling testing, tests were conducted to find potential bugs in the system. The first
interface tested was the image upload. The next interface was the prediction of the plant disease,
where no bug was found, as the system is predicting the disease accurately. The system is
generating accurate results with cure instructions.[26]

7.10 REGRESSION TESTING

Regression testing of our web application is performed each time we make changes to the
codebase.[27] Our web application has been tested many times.

REGRESSION TESTING

Test Case 1
Scenario: User uploads an image input via the browse button or upload image button.
Steps: User selects input in the form of an image.
Data: Image stored on device.
Expected Result: Saved in database.
Actual Result: Stored in directory and database.
Status: Pass

Test Case 2
Scenario: User uploads an image input for plant disease detection via the detect button.
Steps: User clicks on the detect button to start the detection process.
Data: Image stored on device.
Expected Result: Detection of disease.
Actual Result: Name of disease.
Status: Pass

Test Case 3
Scenario: User selects a low quality image.
Steps: Load image from device.
Data: Image stored on device.
Expected Result: Detection of disease is difficult.
Actual Result: Disease name and cure instructions are displayed.
Status: Pass

Test Case 4
Scenario: User selects a high quality image.
Steps: Load image from device.
Data: Image stored on device.
Expected Result: Detection of disease is possible.
Actual Result: Disease name and cure instructions are displayed.
Status: Pass

TABLE 7.8 Regression Testing

7.10.1 CONCLUSION OF REGRESSION TESTING

In regression testing, we tested our application multiple times using different inputs to check
whether the test cases failed or passed. In the first test case we checked whether an image is
uploaded successfully or not. In the second test we checked whether our system predicts
accurately or not. In the last two tests we checked whether any error is generated by the quality
of the image uploaded by the user.[27]

7.11 INTEGRATION TESTING

Integration testing is carried out by combining multiple components of the software as a group and
then testing them.[28] The goal of integration testing is to ensure that all components work
properly when integrated together.[58]

INTEGRATION TESTING

Test Cases | Attributes | Description | Expected Result | Result
Browse button / upload image button with image input | Accept | In our web app the targeted integration is uploading an image from the user's device. | Image uploaded successfully. | Pass
Detect Disease button integrated with browsed image | Succeed | We integrate the disease detection through the Detect Disease button. | It detects the plant disease. | Pass
Integration of View Result button with cure instructions and disease detection | Succeed | We integrate the View Result button with cure instructions on the same page. | It provides cure instructions. | Pass
Integration of logo with home page | Succeed | We integrate the logo with the homepage. | It takes the user to the homepage. | Pass

TABLE 7.9 Integration Testing

7.11.1 CONCLUSION OF INTEGRATION TESTING

In integration testing, we tested how different components of our web application perform when
integrated together. In the first test case, we tested whether the image input button works as
expected. In the second test case, we tested whether our plant disease detection system works
properly. In the third and fourth, we tested whether the View Result button gives accurate
instructions and whether the logo is linked to the homepage.[29]
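The integrated path under test — upload, then detect, then view result — can be sketched as three components wired together and exercised as one flow. The function names below are illustrative, not the project's actual identifiers:

```python
# Hedged sketch of the integration path tested above: upload -> detect ->
# view result, combined as a group. All names are illustrative.

def save_upload(image_bytes, store):
    """Upload step: persist the image (here, an in-memory store)."""
    store["image"] = image_bytes
    return True

def detect(store):
    """Detection step: runs only if an image was actually uploaded."""
    return "Early Blight" if store.get("image") else None

def view_result(disease):
    """Result step: attach cure instructions to the detected disease."""
    return f"{disease}: prune affected foliage and apply a copper-based spray."

def handle_diagnosis(image_bytes):
    """Integration under test: all three components combined as one flow."""
    store = {}
    if not save_upload(image_bytes, store):
        return "upload failed"
    disease = detect(store)
    return view_result(disease) if disease else "no image to analyse"
```

An integration test then asserts on the end-to-end result rather than on any single component, which is exactly what distinguishes it from the unit tests in the next section.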

7.12 UNIT TESTING

Unit testing is used to test a single unit or component of the software in isolation. The main reason
for performing unit tests is to verify whether each unit or component works as expected.

UNIT TESTING

Buttons | Attribute | Description | Expected Result | Status
Browse image / upload image | Null | Click the button. | User uploads the image through the browse button. | Pass
View Result | Null | Click the button. | The disease is detected; furthermore, the user is provided with cure instructions and disease severity. | Pass
Diagnose Now | Null | Click the button. | Disease is detected. | Pass
Logo | Null | Click the button. | User reaches the homepage. | Pass

7.12.1 CONCLUSION OF UNIT TESTING

These successful unit tests demonstrate the reliability and effectiveness of the user interface,
ensuring that users can easily navigate the software, upload images, and receive accurate disease
diagnoses and recommendations.[59] The seamless functionality of these features enhances the
overall user experience, contributing to the software's goal of promoting sustainable agricultural
practices and efficient plant care. They confirm that key functionalities, such as image uploading,
result viewing, disease diagnosis, and navigation to the homepage, are working as expected.[30]
Each tested button has passed, indicating that the user interface components are functioning
correctly and providing the desired outcomes.
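Beyond the interface buttons, individual processing helpers can be unit-tested in the same isolated fashion. A minimal sketch in pure Python with no external libraries — `resize_nearest` is an illustrative helper, not the project's actual code:

```python
def resize_nearest(pixels, new_w, new_h):
    """Nearest-neighbour resize of a 2-D list of pixel values.
    Illustrative helper used to show unit testing of one component in isolation."""
    old_h, old_w = len(pixels), len(pixels[0])
    return [[pixels[r * old_h // new_h][c * old_w // new_w]
             for c in range(new_w)]
            for r in range(new_h)]

def test_resize_dimensions():
    out = resize_nearest([[1, 2], [3, 4]], 4, 4)
    assert len(out) == 4 and len(out[0]) == 4   # output has the requested size

def test_resize_preserves_corners():
    out = resize_nearest([[1, 2], [3, 4]], 4, 4)
    assert out[0][0] == 1 and out[3][3] == 4    # corner pixels survive resizing
```

Each test exercises exactly one behaviour of one unit, mirroring the button-level checks in the table above.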

7.13 DECISION TESTING

DECISION TESTING

Test Case | Conditions to Execute | Input | Expected Result | Actual Result | Status
Browse: User selects image from the device. | Condition 1: if user browses for an image. | Image (.jpg) in device. | Browse images in device. | Browse plant images and display on the screen. | Pass
Select Image: User selects an image to upload. | Condition 1: if selected image is of type (.jpg). | Selected image file. | Appears on screen. | Appears on screen and goes on to further steps. | Pass
Select Image (continued) | Condition 2: if selected image is of type (.png). | Selected image file. | Appears on screen. | Appears on screen and goes on to further steps. | Pass
Select Image (continued) | Condition 3: if selected image is not of type (.jpg) or (.png). | Selected image file. | Gives error of invalid image. | Gives instruction to upload an image of a valid extension. | Pass
Upload plant image | Condition 1: if selected image is of valid type. | Selected image file. | Appears on screen. | Appears on screen and goes on to further steps. | Pass

TABLE 7.10 Decision Testing
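Each condition in the decision table maps to one branch of the file-type check. A hedged sketch of how such branch logic might look (the function name and messages are illustrative, not the project's actual code):

```python
def validate_image(filename):
    """One branch per condition in the decision table above."""
    lowered = filename.lower()
    if lowered.endswith(".jpg"):      # Condition 1: .jpg is accepted
        return "accepted"
    if lowered.endswith(".png"):      # Condition 2: .png is accepted
        return "accepted"
    # Condition 3: any other extension is rejected with an instruction
    return "error: please upload an image with a .jpg or .png extension"
```

Decision testing then requires at least one test case per branch, which is exactly what the three "Select Image" conditions in the table provide.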

7.14 TECHNICAL INTERFACES:

7.14.1 HOME PAGE:

7.14.2 UPLOAD IMAGE PAGE:

7.14.3 RESULT PAGE (REPORT GENERATE)

CHAPTER VIII

APPENDICES

Appendix A:

Image processing is an essential component for properly capturing, enhancing, and analyzing
photos of plants and for identifying patterns or anomalies suggestive of plant diseases. When
integrated with machine learning, especially deep learning techniques, the program can identify
illness patterns by learning from labeled datasets, which allows reliable disease categorization in
new photos.[31] Image processing-enabled continuous monitoring makes it possible to identify
diseases early on, which enables prompt interventions to stop their spread and enhances crop
management techniques. Additionally, the software's capacity to manage intricate and massive
agricultural datasets is improved by the combination of image processing and machine learning
capabilities, offering farmers and agronomists insightful information on how to maximize crop
health and productivity.[32]

Appendix B:

The main goal of the project is to create a complete software program that targets farmers,
gardeners, and plant enthusiasts and smoothly combines plant identification, disease detection,
and preventive measures. It aims to enable users to recognize plants correctly and take good care
of them. By using cutting-edge picture recognition algorithms and disease detection methods, the
program aims to support sustainable agricultural methods. In order to decrease crop losses and
lessen the need for chemical interventions, it intends to enable early disease diagnosis and provide
timely information for preventive measures, thereby supporting more sustainable agricultural
practices overall.[33]

Appendix C:

The confluence of plant identification and disease detection has been addressed in a number of
applications, providing important new information in this field. Particularly, Plant Vision is
unique as a smartphone application that combines image analysis for disease diagnosis with deep
learning for plant species recognition. Its primary features include the ability to identify plants in
real time and diagnose diseases, which makes it a useful model for our study.[34] Both desktop
and mobile platforms have successfully implemented deployment tactics; the primary cost
factors are related to upgrades.

Appendix D:

Accurately identifying and diagnosing agricultural illnesses depends heavily on the incorporation
of sophisticated algorithms in plant disease detection software. Convolutional Neural Networks
(CNNs), feature extraction algorithms, image segmentation procedures, and data preprocessing
methods are notable among the various algorithms employed for their noteworthy
contributions.[35] Together, these methods allow the program to segment pertinent regions,
extract useful features from plant photos, and preprocess data for improved model performance.
Let us examine each of these algorithmic elements in more detail.[60]

Appendix E:

The right hardware and software selection is essential to the creation of any application. The
scalable and user-friendly image processing method used in this project allows for the thorough
viewing of photos from several angles. The dataset performs well in tasks involving image
processing and segmentation. Python is the preferred programming language because of its
extensive libraries, which make it easier to implement the necessary capabilities effectively.[36]

Appendix F:

1. Component Diagram Overview

Components: User Interface, Image Pre-processing Module, Feature Extraction Module,
Disease Detection Module, Database, and Notification System.

Description:

 The User Interface allows users to upload images and view results.

 The Image Pre-processing Module enhances image quality.

 The Feature Extraction Module extracts relevant features from the images.

 The Disease Detection Module classifies and identifies diseases.

 The Database stores plant and disease information.

 The Notification System sends alerts and recommendations to users.

Appendix G:

1. Operational Workflow Diagram

Steps:

 User captures or uploads a plant image.

 Image is pre-processed to enhance quality.

 Features are extracted from the image.

 Extracted features are compared with the database.

 Disease is detected and classified.

 Results are displayed to the user.

 Notifications are sent with recommendations.

Appendix H:

1. Class Diagram Overview

 Classes: User, Image, Plant, Disease, FeatureExtractor, Classifier, Database, Notification.

 Relationships:

 User interacts with Image.

 Image contains Plant and Disease.

 FeatureExtractor processes Image.

 Classifier uses FeatureExtractor.

 Database stores Plant and Disease data.

 Notification provides feedback to User.
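The classes and relationships above can be sketched as Python skeletons. This is a hedged illustration of the diagram's structure only, not the project's actual source; the placeholder logic inside stands in for a trained model:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Disease:
    name: str
    cure: str

@dataclass
class Plant:
    species: str
    diseases: List[Disease] = field(default_factory=list)

@dataclass
class Image:
    path: str
    plant: Optional[Plant] = None           # Image contains Plant and Disease data

class FeatureExtractor:
    def process(self, image: Image) -> List[float]:
        """FeatureExtractor processes Image (placeholder feature vector)."""
        return [float(len(image.path))]

class Classifier:
    def __init__(self, extractor: FeatureExtractor):
        self.extractor = extractor           # Classifier uses FeatureExtractor
    def classify(self, image: Image) -> Disease:
        features = self.extractor.process(image)
        # Placeholder decision standing in for the trained model.
        name = "Blight" if features[0] > 0 else "Healthy"
        return Disease(name, "See care instructions.")
```

Dataclasses keep the data-holding classes (User, Image, Plant, Disease) lightweight, while the behavioural classes (FeatureExtractor, Classifier) hold the processing logic, mirroring the diagram's split.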

Appendix I:

1. Object Diagram Overview

 Instances:

 User: user1

 Image: PlantImage01

 Plant: Tomato

 Disease: Blight

 FeatureExtractor: Extractor01

 Classifier: Classifier01

 Database: PlantDiseaseDB

 Notification: Notification01

 Relationships: Specific instances of classes interacting with each other.

Appendix J:

Use Case Diagram Overview

1. Actors: User, Admin

2. Use Cases:

 Upload Image

 View Disease Diagnosis

 Manage Database

 Receive Notifications

 View Plant Care Instructions

3. Relationships:

 User performs Upload Image and View Disease Diagnosis.

 Admin performs Manage Database.

 Both User and Admin receive Notifications and view Plant Care Instructions.

Appendix K:

1. Activity Diagram Overview

 Activities:

 Start

 Capture/Upload Image

 Pre-process Image

 Extract Features

 Compare Features with Database

 Classify Disease

 Display Results

 Send Notifications

 End

2. Flow: Sequential flow of activities from start to end.

Appendix L: Software Workflow

1. Workflow Overview

 The software follows a sequential process from image input to disease detection and
recommendation.

 Steps:

 User uploads or captures a plant image.

 Image undergoes pre-processing to enhance quality and clarity.

 Relevant features are extracted from the image.

 Extracted features are compared with the database to identify diseases.

 The disease is classified, and results are displayed to the user.

 Users receive recommendations for disease management and preventive measures.

 Description: This workflow ensures a systematic analysis of plant images and provides users
with actionable insights to address detected diseases.
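The sequential workflow above can be expressed as one orchestration function calling one stage per step. A hedged sketch with trivial placeholder stages — none of these function names are the project's actual identifiers:

```python
# Hedged sketch of the workflow: one stand-in function per stage, chained
# in the order described above. Real stages would do image processing and
# model inference; these placeholders only show the control flow.

def preprocess(raw_image):
    """Pre-processing stage: stand-in for quality/clarity enhancement."""
    return raw_image.strip().lower()

def extract_features(image):
    """Feature extraction stage: stand-in returning a trivial descriptor."""
    return {"length": len(image)}

def match_database(features, database):
    """Comparison stage: look the descriptor up in a known-disease table."""
    return database.get(features["length"], "Unknown")

def recommend(disease):
    """Recommendation stage: pair the diagnosis with management advice."""
    return f"Detected: {disease}. See the care instructions for this disease."

def run_workflow(raw_image, database):
    """Sequential workflow from image input to recommendation."""
    image = preprocess(raw_image)
    features = extract_features(image)
    disease = match_database(features, database)
    return recommend(disease)
```

Keeping each stage behind its own function makes the pipeline easy to test stage-by-stage, which is what the testing chapter's unit and integration sections rely on.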

Appendix M: Image Processing Algorithms

1. Image Pre-processing Algorithms

 Resize: Adjusts image dimensions for consistency.

 Noise Reduction: Filters out noise to improve image clarity.

 Color Correction: Enhances color balance for accurate analysis.

 Contrast Enhancement: Improves image contrast for better feature extraction.

 Edge Detection: Identifies edges of leaves for feature extraction.
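Toy versions of three of these pre-processing steps can be sketched with NumPy alone; a real implementation would more likely use OpenCV or scikit-image:

```python
import numpy as np

def contrast_stretch(img):
    """Contrast enhancement: min-max stretch to the full 0-255 range."""
    img = img.astype(float)
    lo, hi = img.min(), img.max()
    return np.rint((img - lo) / (hi - lo + 1e-9) * 255).astype(np.uint8)

def mean_filter(img):
    """Noise reduction: 3x3 mean filter over the valid interior region."""
    h, w = img.shape
    acc = sum(img[r:r + h - 2, c:c + w - 2].astype(float)
              for r in range(3) for c in range(3))
    return acc / 9.0

def edge_magnitude(img):
    """Edge detection: gradient magnitude highlights leaf boundaries."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)
```

Resize and color correction follow the same pattern: a small, pure array transformation applied before any features are extracted.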

2. Feature Extraction Algorithms

 Texture Analysis: Examines texture patterns using Gray Level Co-occurrence Matrix
(GLCM) or similar techniques.

 Color Histogram Analysis: Analyzes color distribution within the image.

 Shape Analysis: Identifies shape characteristics of plant features.

 Statistical Analysis: Calculates statistical properties of image features.
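Two of these extractors can be sketched in miniature: a horizontal-offset GLCM with its contrast statistic, and a normalized color histogram. These are tiny NumPy illustrations; a real implementation would more likely use scikit-image's `graycomatrix`:

```python
import numpy as np

def glcm(gray, levels):
    """Gray Level Co-occurrence Matrix for horizontal neighbours, offset (0, 1)."""
    m = np.zeros((levels, levels))
    for i, j in zip(gray[:, :-1].ravel(), gray[:, 1:].ravel()):
        m[i, j] += 1
    return m / m.sum()

def glcm_contrast(m):
    """Texture contrast: (i - j)^2 weighted by co-occurrence probability."""
    di, dj = np.meshgrid(np.arange(m.shape[0]), np.arange(m.shape[1]),
                         indexing="ij")
    return float(((di - dj) ** 2 * m).sum())

def color_histogram(channel, bins=4):
    """Normalized distribution of intensities in one color channel."""
    hist, _ = np.histogram(channel, bins=bins, range=(0, 256))
    return hist / hist.sum()
```

Statistics such as contrast, together with the histogram bins, become entries in the feature vector handed to the classifier.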

3. Disease Detection Algorithms

 Convolutional Neural Networks (CNNs): Deep learning models trained to classify diseases
based on extracted features.

 Support Vector Machines (SVMs): Machine learning models for disease classification.
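The heart of a CNN is the convolution operation. The sketch below shows a single forward pass — one convolution layer, ReLU, mean pooling, and an argmax decision — purely to illustrate the mechanism; the actual project would train such filters with a deep learning framework rather than hand-pick them:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution: the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((h, w))
    for r in range(h):
        for c in range(w):
            out[r, c] = float((img[r:r + kh, c:c + kw] * kernel).sum())
    return out

def relu(x):
    """Non-linearity applied after each convolution."""
    return np.maximum(x, 0)

def classify(img, kernels, labels):
    """Score each class by the mean pooled response of its filter; pick the max."""
    scores = [relu(conv2d(img, k)).mean() for k in kernels]
    return labels[int(np.argmax(scores))]
```

A trained CNN stacks many such layers and learns the kernel weights from labeled leaf images; an SVM would instead take the extracted feature vectors directly as input.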

1. Trigger Events
 Disease Detection: Notify users upon successful detection of diseases in uploaded images.

 Recommendation Updates: Send periodic recommendations for disease management and
preventive measures.

 Database Updates: Notify users about new plant or disease data added to the database.

CHAPTER IX

ACHIEVEMENTS

CHAPTER X

FUTURE ENHANCEMENT

Given the feasibility and the potential of the Plant Disease Detection Software, several
advancements can be made to refine the software and maximize its performance. One
improvement would be to expand the list of diseases the software can identify beyond the eight
original diseases: Septoria Leaf Spot, Yellow Leaf Curl Virus, Black Measles, Isariopsis Leaf
Spot, Apple Scab, Cedar Apple Rust, Early Blight, and Late Blight. To achieve this expansion, it
would be necessary to train our AI models on more extensive and richer datasets that allow
broader coverage of plant diseases. We can also improve general user access and involvement by
creating a multi-user website where users can create their own accounts to save their history; this
would allow better tracking of disease management and progress over time. Moreover, it would
be beneficial to create native applications for the iOS and Android platforms, which would let
users experience the software easily and naturally. These apps could take advantage of advanced
features such as high-definition cameras and touch screens, which would assist in capturing
images of plant leaves more accurately and improve user interaction. However, since user data is
sensitive information, it is essential to employ enhanced security features such as end-to-end
encryption and multi-factor authentication. Finally, to add a certain dynamic to the application,
user competitions could be incorporated to reward those who best manage the health and
wellbeing of their plants. All these improvements together will help make our software the best
it can be by providing users with efficient and sustainable means to practice agriculture.

REFERENCES

[1] H. Sabrol and S. Kumar, "Recent studies of image and soft computing techniques for plant
disease recognition and classification," International Journal of Computer Applications, vol.
126, no. 1, 2015.
[2] A. Meunkaewjinda, P. Kumsawat, K. Attakitmongcol, and A. Srikaew, "Grape leaf disease
detection from color imagery using hybrid intelligent system," in 2008 5th International
Conference on Electrical Engineering/Electronics, Computer, Telecommunications and
Information Technology, 2008, vol. 1: IEEE, pp. 513-516.
[3] Sankaran S, Mishra A, Ehsani R, Davis C. A review of advanced techniques for detecting
plant diseases. Computers and electronics in agriculture. 2010 Jun 1;72(1):1-3
[4] Nagaraju, Mamillapally, and Priyanka Chawla. "Systematic review of deep learning
techniques in plant disease detection." International journal of system assurance engineering and
management 11, no. 3 (2020): 547-560.
[5] Martinelli F, Scalenghe R, Davino S, Panno S, Scuderi G, Ruisi P, Villa P, Stroppiana D,
Boschetti M, Goulart LR, Davis CE. Advanced methods of plant disease detection. A review.
Agronomy for Sustainable Development. 2015 Jan;35:1-25.
[6] Shoaib, Muhammad, Babar Shah, Shaker Ei-Sappagh, Akhtar Ali, Asad Ullah, Fayadh
Alenezi, Tsanko Gechev, Tariq Hussain, and Farman Ali. "An advanced deep learning models-
based plant disease detection: A review of recent research." Frontiers in Plant Science 14 (2023):
1158933.
[7] Harakannanavar, Sunil S., et al. "Plant leaf disease detection using computer vision and
machine learning algorithms." Global Transitions Proceedings 3.1 (2022): 305-310.
[8] De Luna, R. G., Dadios, E. P., & Bandala, A. A. (2018, October). Automated image
capturing system for deep learning-based tomato plant leaf disease detection and recognition. In
TENCON 2018-2018 IEEE Region 10 Conference (pp. 1414-1419). IEEE.
[9] Golhani, K., Balasundram, S. K., Vadamalai, G., & Pradhan, B. (2018). A review of neural
networks in plant disease detection using hyperspectral data. Information Processing in
Agriculture, 5(3), 354-371.
[10] Shoaib, M., Shah, B., Ei-Sappagh, S., Ali, A., Ullah, A., Alenezi, F., Gechev, T., Hussain,
T. and Ali, F., 2023. An advanced deep learning models-based plant disease detection: A review
of recent research. Frontiers in Plant Science, 14, p.1158933.
[11] Moshou, Dimitrios, Cedric Bravo, Roberto Oberti, Jon West, Luigi Bodria, Alastair
McCartney, and Herman Ramon. "Plant disease detection based on data fusion of hyper-spectral
and multi-spectral fluorescence imaging using Kohonen maps." Real-Time Imaging 11, no. 2
(2005): 75-83.
[12] Devi, P. R. (2021, August). Leaf Disease Detection Using Deep Learning. In 2021 Second
International Conference on Electronics and Sustainable Communication Systems (ICESC) (pp.
1797-1804). IEEE.
[13] Harakannanavar, Sunil S., et al. "Plant leaf disease detection using computer vision and
machine learning algorithms." Global Transitions Proceedings 3.1 (2022): 305-310.
[14] Tete, Trimi Neha, and Sushma Kamlu. "Plant Disease Detection Using Different
Algorithms." RICE. 2017.
[15] Jones, Lyle V., and Donald W. Fiske. "Models for testing the significance of combined
results." Psychological Bulletin 50.5 (1953): 375.
[16] Jones, Lyle V., and Donald W. Fiske. "Models for testing the significance of combined
results." Psychological Bulletin 50, no. 5 (1953): 375.
[17] Goetz, Christopher G., et al. "Movement Disorder Society‐sponsored revision of the Unified
Parkinson's Disease Rating Scale (MDS‐UPDRS): scale presentation and clinimetric testing
results." Movement disorders: official journal of the Movement Disorder Society 23.15 (2008):
2129-2170.
[18] Goetz, Christopher G., Barbara C. Tilley, Stephanie R. Shaftman, Glenn T. Stebbins,
Stanley Fahn, Pablo Martinez‐Martin, Werner Poewe et al. "Movement Disorder Society‐
sponsored revision of the Unified Parkinson's Disease Rating Scale (MDS‐UPDRS): scale
presentation and clinimetric testing results." Movement disorders: official journal of the
Movement Disorder Society 23, no. 15 (2008): 2129-2170.

Page 97 of 101
[19] Hooda, Itti, and Rajender Singh Chhillar. "Software test process, testing types and
techniques." International Journal of Computer Applications 111, no. 13 (2015).
[20] Madsen, H. S. (1983). Techniques in Testing. Oxford University Press, 200 Madison Ave.,
New York, NY 10016 (ISBN-0-19-434132-1, $5.95).
[21] Parshall, Cynthia G., Tim Davey, and Peter J. Pashley. "Innovative item types for
computerized testing." Computerized adaptive testing: Theory and practice (2000): 129-148.
[22] Duchastel, P. C. (1981). Retention of prose following testing with different types of tests.
Contemporary Educational Psychology, 6(3), 217-226.
[23] Constâncio, V., Nunes, S.P., Henrique, R. and Jerónimo, C., 2020. DNA methylation-based
testing in liquid biopsies as detection and prognostic biomarkers for the four major cancer types.
Cells, 9(3), p.624.
[24] Madsen, Harold S. Techniques in Testing. Oxford University Press, 200 Madison Ave.,
New York, NY 10016 (ISBN-0-19-434132-1, $5.95), 1983.
[25] Sawant, Abhijit A., Pranit H. Bari, and P. M. Chawan. "Software testing techniques and
strategies." International Journal of Engineering Research and Applications (IJERA) 2, no. 3
(2012): 980-986.
[26] Bray, D.E. and McBride, D., 1992. Nondestructive testing techniques.
[27] Beizer, B., 1995. Black-box testing: techniques for functional testing of software and
systems. John Wiley & Sons, Inc.
[28] Nidhra, S. and Dondeti, J., 2012. Black box and white box testing techniques-a literature
review. International Journal of Embedded Systems and Applications (IJESA), 2(2), pp.29-50.
[29] Do H, Elbaum S, Rothermel G. Supporting controlled experimentation with testing
techniques: An infrastructure and its potential impact. Empirical Software Engineering. 2005
Oct;10:405-35.
[30] Balci, Osman. "Validation, verification, and testing techniques throughout the life cycle of a
simulation study." Annals of Operations Research 53 (1994): 121-173.
[31] Vishnoi, V. K., Kumar, K., & Kumar, B. (2021). Plant disease detection using
computational intelligence and image processing. Journal of Plant Diseases and Protection, 128,
19-53.
[32] Narayanasamy, P. Microbial Plant Pathogens - Detection and Disease Diagnosis: Viral and
Viroid Pathogens, Vol. 3. Springer Science & Business Media, 2010.
[33] Vishnoi, Vibhor Kumar, Krishan Kumar, and Brajesh Kumar. "A comprehensive study of
feature extraction techniques for plant leaf disease detection." Multimedia Tools and
Applications 81, no. 1 (2022): 367-419.
[34] Rizk H. Automated early plant disease detection and grading system: development and
implementation.
[35] Geetharamani, G. and Pandian, A., 2019. Identification of plant leaf diseases using a nine-
layer deep convolutional neural network. Computers & Electrical Engineering, 76, pp.323-338.
[36] Barbedo JG, Koenigkan LV, Santos TT. Identifying multiple plant diseases using digital
image processing. Biosystems Engineering. 2016 Jul 1;147:104-16.
[37] Martinelli, Federico, Riccardo Scalenghe, Salvatore Davino, Stefano Panno, Giuseppe
Scuderi, Paolo Ruisi, Paolo Villa et al. "Advanced methods of plant disease detection. A review."
Agronomy for Sustainable Development 35 (2015): 1-25.
[38] Bhise, N., S. Kathet, S. Jaiswar, and Amarja Adgaonkar. "Plant disease detection using
machine learning." International Research Journal of Engineering and Technology (IRJET) 7,
no. 7 (2020): 2924-2929.
[39] "Review of Computer Vision Techniques for the Analysis of Plant Diseases," by N.
Barbedo, 2016.
[40] Deep Learning-Based Image Analysis for Plant Disease Detection, by Y. Jiang et al., 2018.

[41] Automatic Detection of Plant Diseases Using Machine Learning Techniques, by P. Dey et
al., 2019.
[42]A Survey on Deep Learning Techniques for Plant Disease Detection and Classification, by
M. Sharma et al., 2020.
[43] Computer Vision-Based Tomato Disease Detection Using Convolutional Neural Networks,
by S. H. Khan et al., 2017.
[44]"Mobile-Based Deep Learning Model for Real-Time Plant Disease Detection," by R. P.
Singh et al., 2021.
[45]"A Comparative Study of Machine Learning Algorithms for Plant Disease Identification," by
A. K. Mishra et al., 2018.
[46]"Real-Time Detection and Classification of Plant Diseases Using a Deep Learning
Framework," by J. Zhang et al., 2019.
[47]"Automated Detection of Tomato Diseases Using Image Processing and Machine Learning
Techniques," by A. A. Selvaraj et al., 2019.
[48]"Deep Learning Approaches for Plant Disease Detection and Classification: A Review," by
S. M. A. Karim et al., 2021.
[49]"Development of a Smartphone Application for Plant Disease Diagnosis," by H. N. Phoulady
et al., 2020.
[50]"A Review on Deep Learning Techniques for Plant Disease Detection and Diagnosis," by S.
S. Hossain et al., 2020.
[51]"Fruit Disease Detection Using Deep Learning: A Review," by A. Akhtar et al., 2021.
[52]"Real-Time Detection of Potato Diseases Using Image Processing and Machine Learning,"
by G. S. R. Naik et al., 2018.
[53]"A Comparative Study of Deep Learning Models for Tomato Disease Detection," by R. K.
Gupta et al., 2020.
[54]"Mobile-Based Plant Disease Detection Using Convolutional Neural Networks," by V. S.
Patel et al., 2021.
[55]"Transfer Learning-Based Approach for Early Detection of Wheat Diseases," by S. K. Das et
al., 2019.
[56]"A Survey on Computer Vision Techniques for Plant Disease Detection," by S. Deb et al.,
2021.
[57]"Integration of Machine Learning and IoT for Smart Agriculture: A Review," by N. K.
Gupta et al., 2020.
[58]"Remote Sensing Techniques for Early Detection of Crop Diseases: A Review," by P. Dutta
et al., 2019.
[59]"Automated Detection of Crop Diseases Using UAV Imagery: A Review," by A. M. Singh et
al., 2020.
[60]"A Review on Recent Advances in Deep Learning Techniques for Plant Disease Detection,"
by A. N. Jadhav et al., 2021.
