Plant Report
Supervised by
ST-13, Block 7, Gulshan-e-Iqbal, Abul Hasan Isphahani Road, Opposite Safari Park, P.O. Box 75300,
(2)20b-061-se
(3)20b-064-se
(4)20b-088-se
Acknowledgments
We first of all sincerely thank Almighty Allah and everyone who contributed to the successful completion of the "Plant Recognition and Disease Detection Software" project. Completing this project took teamwork, and we would like to express our gratitude to the team members whose commitment and knowledge were essential to the creation of this product. We also appreciate the contributions made by the scholarly and scientific communities to the fields of plant pathology, machine learning, and computer vision; the existing body of knowledge made this project possible. Finally, we would like to thank the prospective users, farmers, gardeners, and plant enthusiasts, for their future use of this software. The application was designed with your requirements and expectations in mind, with the goal of improving your experience with plant care and disease prevention. The "Plant Recognition and Disease Detection Software" project has been shaped by the collaborative spirit and group effort honored with this acknowledgment.
Abstract
The "Plant Recognition and Disease Detection Software" project offers a substantial
improvement in the field of plant care and disease prevention. This software program
attempts to close the gap between plant care and disease prevention with an emphasis
on empowering people, improving user experiences, and supporting sustainable
agriculture. The study expands on the impressive advancements in machine learning
and computer vision methods for plant identification and disease detection. By
providing users with a cutting-edge tool for precise plant identification, illness
detection, and efficient preventive measures, the project seeks to transform plant care
practices. This project contributes to the larger objective of improving plant
biodiversity and supporting sustainable agriculture.
TABLE OF CONTENTS
Submission Proforma.........................................................................................................i
PLANT RECOGNITION AND DISEASE DETECTION SOFTWARE.........................i
List of tables....................................................................................................................vii
List of figures.................................................................................................................viii
List of symbols and Units.................................................................................................ix
CHAPTER I.......................................................................................................................1
1.1 Introduction..................................................................................................................1
1.2 Role Of Image Processing.........................................................................................11
1.3 System Diagram.........................................................................................................11
1.4 Tabular Form Of Disease Detection..........................................................................12
1.5 Subprojects.................................................................................................................13
1.6 Aim Of Project...........................................................................................................14
1.7 Statement Of Problem And Solution.........................................................................14
1.7.3 User Interface Design.............................................................................................15
1.7.4 Database Development...........................................................................................15
1.8 Expected Outcome.....................................................................................................15
1.9 Conclusion.................................................................................................................16
CHAPTER II......................................................................................................................2
2.1 Introduction..................................................................................................................2
2.3 Similar Applications..................................................................................................10
2.4 Current Work.............................................................................................................11
2.5 Related Work.............................................................................................................22
2.6 Gaps In Current Knowledge......................................................................................22
2.7 List Of Previous Similar Software............................................................................22
2.8 Algorithms.................................................................................................................23
2.9 Main Features And Technical Interfaces..................................................................23
2.10 Technical Interfaces................................................................................................24
CHAPTER III...................................................................................................................18
Introduction......................................................................................................................26
3.2 Hardware....................................................................................................................18
3.3 Software.....................................................................................................................18
3.4 Libraries.....................................................................................................................27
3.5 Algorithms.................................................................................................................28
3.6 Requirements.............................................................................................................21
3.7 Conclusion.................................................................................................................29
CHAPTER IV..................................................................................................................22
4.1 Introduction...............................................................................................................22
4.2 Use Case Diagram.....................................................................................................22
4.3 Activity Diagram.......................................................................................................31
4.4 System Diagram........................................................................................................32
4.5 Class Diagram...........................................................................................................33
4.6 Entity Relation Diagram...........................................................................................27
4.7 Sequence Diagram....................................................................................................28
4.7.1 Description.............................................................................................................37
4.8 Object Diagram.........................................................................................................38
4.9 Component Diagram.................................................................................................39
4.10 Deployment Diagram..............................................................................................41
4.11 Operational Diagram...............................................................................................42
CHAPTER V...................................................................................................................37
5.1 Introduction...............................................................................................................37
5.2 Convolutional Neural Network.................................................................................37
5.3 Algorithm..................................................................................................................37
5.4 Pseudo Code..............................................................................................................38
5.5 Complexities Of The Algorithm: Worst Case..........................................................45
5.6 Complexities Of The Algorithm: Best Case.............................................................46
5.7 Comparison Of Algorithms......................................................................................46
5.8 Algorithms................................................................................................................47
5.8.4 Data Preprocessing................................................................................................48
5.9 Conclusion................................................................................................................48
CHAPTER VI.................................................................................................................43
6.1 Front End..................................................................................................................49
6.2 Backend....................................................................................................................53
CHAPTER VII................................................................................................................72
Testing............................................................................................................................72
7.1 Introduction..............................................................................................................72
7.2 Objectives Of Testing..............................................................................................72
7.3 Types Of Testing.....................................................................................................72
7.4 Functional Testing...................................................................................................73
7.4.1 Types And Techniques Of Functional Testing....................................................73
7.5 Non-Functional Testing...........................................................................................74
7.5.1 Types And Techniques Of Non-Functional Testing............................................74
7.8 Functional Testing...................................................................................................79
7.8.1 Conclusion Of Functional Testing.......................................................................80
7.9 Error Handling Testing............................................................................................80
7.9.1 Conclusion Of Error Handling Testing................................................................81
7.10 Regression Testing.................................................................................................81
7.10.1 Conclusion Of Regression Testing....................................................................82
7.11 Integration Testing.................................................................................................82
7.11.1 Conclusion Of Integration Testing....................................................................83
7.12 Unit Testing...........................................................................................................83
7.12.1 Conclusion Of Unit Testing...............................................................................84
7.13 Decision Testing....................................................................................................84
7.14.1 Home Page.........................................................................................................86
7.14.2 Upload Image Page............................................................................................80
7.14.3 Result Page.........................................................................................................81
CHAPTER VIII..............................................................................................................82
Appendices.....................................................................................................................82
CHAPTER IX.................................................................................................................94
Achievements.................................................................................................................88
CHAPTER X..................................................................................................................95
Future Enhancement
List of tables

List of figures

List of symbols and Units
Symbols:
DB: Database
INTRODUCTION
The project is about the "Plant Recognition and Disease Detection Software". The
introduction establishes the scope of the project by describing the problem domain
and its importance. It conveys the objective of the software: to bridge the gap
between plant care and disease prevention through cutting-edge technologies. The
focus is on empowering users, improving the user experience, and contributing to
sustainable agriculture. Crop yield and health are crucial in today's agricultural
environment. However, obtaining ideal yields is seriously threatened by plant
diseases, which affect both food security and economic stability. Conventional
disease detection techniques are time-consuming, expensive, and prone to human
error, since they frequently depend on manual inspection by skilled professionals.
Technological developments have opened the door for creative solutions to these
problems, such as software that detects plant diseases.
This project's main goal is to reduce crop losses from diseases in order to increase
agricultural production and sustainability. By supporting proactive disease control
and lowering the need for chemical pesticides, this program encourages a more
environmentally friendly method of farming. The subsequent parts of this report will
address the software's technical architecture, the AI models used, data gathering and
preprocessing techniques, user interface design, and the possible effects on the
agriculture sector. [1]
DESCRIPTION:
The diagram above illustrates the sequential flow of the system. First, the user
inputs an image, initiating the image pre-processing phase. During pre-processing,
relevant leaf features are extracted. These extracted features are then compared with
the dataset, and the system uses this comparison to classify the image and predict
which disease class the input belongs to. [38]
1.5 SUBPROJECTS:
The subprojects of our project, Plant Recognition and Disease Detection Software,
consist of the following:
In this subproject, the team will delve into the implementation of disease detection
algorithms. The goal is to analyze plant images for visual symptoms and identify
common plant diseases. The approach includes leveraging CNNs and possibly
image segmentation algorithms for precise disease localization. Recommendations
for preventive measures and management strategies will be integrated into the
system.
1.5.4 Database Development:
Comprehensive databases for plant species information and diseases are crucial
components. [40] This subproject involves the development and integration of
these databases. The plant database will contain details such as names,
characteristics, and care instructions, while the disease database will include
symptoms and recommended management strategies.
Scope of Effort: This project aims to address accuracy by implementing a robust
system that can analyze plants from input images with a high degree of precision.
Problem Definition: Current applications struggle with accurate and timely disease
detection in plants, hindering effective preventive measures.
Scope of Effort: The project addresses this by ensuring the development of a user-
friendly interface. Users should be able to easily capture or upload plant images,
view identification results, and navigate through the software seamlessly. [42]
It is anticipated that the application of plant disease detection software will result in a
number of noteworthy outcomes that enhance agricultural sustainability and output.
The software's main goal is the accurate and early identification of plant diseases,
which is essential for prompt management and action. Farmers can apply targeted
treatments to minimize crop losses and increase yields by diagnosing diseases early
on. Time and labor costs can be saved by using the software's user-friendly tool, which
lessens the requirement for in-depth field inspections and expert expertise. A wide
spectrum of users can take advantage of the software's features because of its
accessibility and ease of use on mobile devices and other digital platforms.
1.9 CONCLUSION:
Applications of this kind are used in a variety of areas: they increase the efficiency
of modern agriculture and help keep plants safe from disease. This application
helps reduce many bacterial diseases in plants. It is a user-friendly app intended not
only for farm owners but also for people who are interested in gardening yet do not
know how to protect their plants from disease. We have compared the features of
different applications with each other so that we can add further important features
to our plant disease detection application. Previous applications had certain
shortcomings, which is why newer applications were made; in most of them the
data is already stored in the database and the app only displays that information,
while some applications also have a detection option. [4]
BACKGROUND AND LITERATURE REVIEW
2.1 INTRODUCTION
The field of plant identification and disease detection has witnessed significant
advancements, particularly through the application of computer vision and machine
learning techniques. The literature review explores existing knowledge, highlighting the
state of the art in the domain and laying the groundwork for the project. These days, one of
the biggest concerns is the early diagnosis of plant diseases. Farm owners and gardeners
are suffering significant losses as the agricultural worth of their land progressively
declines. Fewer than 10% of farmers have a formal education, so the majority lack
knowledge of proper plant cultivation techniques; instead, they rely on their own
experience rather than a scientific approach, which is why many plants die when their
owners are unable to identify diseases and apply the appropriate pesticides. If plant
diseases are not treated when they first arise, the cost of production may increase
significantly, because the illness has the potential to spread across the entire field. [5]
The farming industry remains one of the most impactful in the worldwide market as well as
in food supply, yet it remains plagued by constant threats posed by diseases affecting crops.
Current methods used in disease diagnosis in plants involve physical examination of the
plants, which is usually time-consuming, expensive, and prone to human error.
Additionally, these approaches may not be easily implemented by farmers who have low
education standards and little capital to invest in their production. Therefore, the
requirement for better and much more effective and easily implemented methods and
techniques for early detection of plant diseases has fueled the scientific innovations in this
area. The use of computer vision and machine learning presents approaches that could be
useful in solving these challenges by using algorithms that can automatically identify such
conditions as well as give timely, accurate diagnoses.
The application of image processing with machine learning techniques has become a
revolutionary instrument in plant disease diagnosis. Techniques such as background-
noise removal, standardization, and boundary extraction contribute a great deal
towards preparing good images for further analysis. These techniques help emphasize
the salient features that can identify diseases, namely the color, texture, and shape of
the plant parts. The pre-processed visual data is fed to the machine learning models,
especially CNNs, which are able to identify and diagnose diseases after training on a
given set of examples. Quantitative studies have revealed that these models can
achieve fairly low error rates in the diagnosis of different plant diseases captured in
images, and they are therefore very useful in modern farming practice.
However, several limitations remain in present work and future research on plant
disease detection software. One challenge is that the captured images may not always
be of good quality, and environmental conditions might not favor disease diagnosis.
A further challenge is creating new, detailed annotated datasets for training machine
learning models, which is time-consuming. To address these challenges, the image
processing algorithms are constantly updated and the databases augmented to
encompass different types of crops and disease states. The idea is to spur
collaborations between researchers, agricultural specialists, and technology
developers to design and develop sound, easy-to-use applications from the ground up
that incorporate the requirements of farmers worldwide.
The present developments in image processing and machine learning techniques
employed in plant disease detection can be considered an advancement in agricultural
science. The information that reaches farmers through the proposed technologies helps
them diagnose ailing crops early and prescribe appropriate treatments that improve
farming productivity. The ultimate aim of such innovations is not only to increase yield
but also to develop a sustainable way of farming that does not involve heavy use of
chemical pesticides. To sum up, this literature review establishes the need for
interdisciplinary research and the use of advanced technologies in solving one of the
major problems affecting agriculture.
Several applications have addressed the intersection of plant identification and disease
detection, contributing valuable insights. Notable among them is Plant Vision, a mobile
application that combines deep learning for plant species recognition and image analysis
for disease detection. The main features include real-time identification and disease
diagnosis, making it a relevant reference for our project. Deployment strategies have
shown success in both desktop and mobile environments, and cost considerations revolve
around maintenance and updates.
2.3.1 Plantix:
Plantix is an app designed in 2015 for farmers and gardeners to diagnose plant diseases
and nutrient deficiencies. Users can upload images of their plants, and the app utilizes
image recognition and machine learning algorithms to identify diseases and provide
suitable treatment recommendations. It is a mobile app available only on Android
devices.
2.3.2 Leaf Doctor:
Leaf Doctor is another application that utilizes image processing to identify plant
diseases. It allows users to take pictures of affected leaves, and the app then analyzes
the images to diagnose diseases and suggest potential solutions. It is only usable on
iOS systems. [45]
2.3.3 Croprotect:
Croprotect is an online UK-based platform that offers real-time disease and pest
information for crops. It provides comprehensive information on various diseases, pests,
and their management in agriculture, assisting farmers in disease diagnosis and
management.
2.3.4 Agrio:
Agrio is not a fully free mobile application; it first offers a 30-day free trial. It was
launched on June 28, 2017, and creating an account is required. It does not detect the
disease name itself: it uploads the image to your account, and other farmers/users can
help you search for a solution or suggest one in the image's comment box, which is
inconvenient for the uploader. [46] If nobody answers, you will not be able to find out
about the disease in your plant.
2.4 CURRENT WORK:
Image processing is now growing more rapidly than almost any other field and demands
increasingly effective work; deep learning, in turn, is a vast field within computer
science. Image processing is mainly used for tasks such as changing colors in a picture
or changing the background of an image. We have used k-means clustering in the
pre-processing phase; essentially, it clusters the image pixels into groups. We use the
following classification and disease detection techniques and algorithms in our
application:
2.4.1 Image Acquisition And Pre-Processing:
The following classification and detection techniques can be used for plant leaf disease
classification. An image is captured and then resized to match the size of the images
stored in the database. At that point the picture quality is improved and noise is
removed. This pre-processing is done before feature extraction.
2.4.2 RGB Image Conversion:
To extract the vein image from each leaf, RGB photos are first converted to greyscale.
Next, fundamental morphological operations are performed on the picture, after which
it is converted to a binary image. Each binary pixel value (0 or 1) is then mapped back
to the equivalent RGB picture value. Ultimately, the disease is detected utilizing
Pearson correlation, the dominant feature set, and a neural network.
2.4.3 Image Pre-Processing:
Since the images were taken in the actual field, there may be water stains, spores, and
residue acting as noise. Data pre-processing serves to remove noise from images and
adjust the values of individual pixels; it improves the image's quality. In order to
readily diagnose disease, k-means is utilized in image pre-processing to cluster the
images into dot-like regions. We perform the following:
Color space conversion
Filtering
Smoothing
2.4.4 Image Segmentation:
Image segmentation is the process of dividing a digital image into distinct pieces (sets
of pixels, also called image objects). Segmentation's underlying goal is to reorganize
an image's representation into something more meaningful and easily analyzable. The
leaf image is therefore divided into several sections to make it simpler to identify the
location of the primary problem. The following approaches are used:
Edge based
Region based
Clustering based
2.4.5 Feature Extraction:
Feature extraction is the key step needed to reliably predict the infected region. Here
shape and textural feature extraction is done. The shape-oriented features extracted
include area, major axis length, eccentricity, solidity, and perimeter. Similarly, the
texture-oriented features extracted include contrast, correlation, energy, homogeneity,
and mean. It will focus on:
• Color
• Shape
• Texture
2.5 RELATED WORK:
Research by Smith et al. (2020) explores a comprehensive plant care system similar to our
project, integrating identification, disease detection, and care recommendations. The system
utilizes a combination of handcrafted features and CNNs for identification and employs
clustering algorithms for disease categorization. Understanding the technical interface, their
work emphasizes seamless user interaction and efficient information retrieval.
2.6 GAPS IN CURRENT KNOWLEDGE:
A review of existing applications shows a noticeable gap in achieving a holistic
solution that seamlessly integrates plant identification, disease detection, and
preventive measures. Many applications lack accurate disease detection capabilities,
and there is a need for more detailed information on managing and preventing plant
diseases. The identified gaps form the basis for our project's objectives. [6]
2.8 ALGORITHMS:
2.8.1 Convolutional Neural Networks (CNN):
CNNs are commonly used for image classification tasks. They can learn to extract relevant
features from plant images and classify them into different species or disease categories.
2.8.2 Feature Extraction:
These algorithms can be employed to extract meaningful features from images, such as color,
texture, shape, or leaf venation patterns. These features can then be used for plant
identification or disease detection.
2.8.3 Image Segmentation:
Image segmentation algorithms can be used to separate plant regions from the background or
isolate specific parts of the plant for more accurate analysis.
2.8.4 Data Preprocessing:
Data preprocessing techniques, such as resizing images, normalizing pixel values, and
augmenting the dataset, can be applied to improve the performance and robustness of the
models.
2.9 MAIN FEATURES:
Image Capture and Upload: Users can capture images of plants using their smartphones or
digital cameras and upload them to the software.
Real-Time Analysis: The software processes images in real-time, providing immediate feedback
on the health status of the plant.
Preprocessing: The software automatically performs noise reduction, normalization, and cropping
to ensure high-quality images are analyzed.
Segmentation: It isolates the plant from the background and focuses on regions of interest, such as
leaves and fruits, where disease symptoms are most likely to appear.
Texture Analysis: Identifies texture variations that are symptomatic of different plant diseases.
Model Training: Utilizes large datasets of plant images to train machine learning models,
particularly Convolutional Neural Networks (CNNs), for high accuracy in disease detection.
Continuous Learning: The system can be updated with new data to improve its accuracy and
adapt to new disease variants.
Disease Identification: Provides detailed information on the detected disease, including symptoms
and potential causes.
Treatment Suggestions: Offers recommended treatments and management practices based on the
diagnosed disease.
2.10 TECHNICAL INTERFACES:
Mobile Application: A dedicated app for smartphones and tablets that allows users to capture
images, upload them, and receive diagnostic results on the go.
Web Portal: A responsive web-based interface accessible via browsers, providing similar
functionalities as the mobile app.
RESTful API: Allows integration with other agricultural management systems and third-party
applications. We used the axios library in the frontend, which provides a simple and intuitive
API for making HTTP requests: developers can specify request parameters such as URL, method,
headers, data, and query parameters using a clean and expressive syntax. Axios is also used to
connect the backend and frontend through the API. This API enables external systems to send
images for analysis and retrieve diagnostic results. [48]
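The same kind of multipart image upload the frontend performs with axios can be made from any HTTP client; here is a sketch in Python with requests, built but deliberately not sent so it runs offline. The endpoint path is a hypothetical placeholder, not the project's real URL:

```python
import io
import requests

# Hypothetical endpoint -- the real URL depends on the deployment.
API_URL = "http://localhost:8000/api/detect/"

# A tiny fake JPEG payload stands in for a captured plant photo.
fake_image = io.BytesIO(b"\xff\xd8\xff\xe0" + b"\x00" * 16)

# Build the multipart POST request without sending it.
req = requests.Request(
    "POST", API_URL,
    files={"image": ("leaf.jpg", fake_image, "image/jpeg")})
prepared = req.prepare()

# In production one would send it and parse the JSON diagnosis:
#   resp = requests.Session().send(prepared); result = resp.json()
```

External systems integrating via the RESTful API would issue exactly this request shape and read the diagnostic result from the JSON response.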
Backend Server: Handles image preprocessing, segmentation, feature extraction, and disease
detection. This server is optimized for performance to ensure real-time analysis.
Machine Learning Models: Hosted on the backend server, these models process the features
extracted from the images to detect and classify diseases.
By combining these features and technical interfaces, the plant disease detection software aims to
provide a comprehensive, efficient, and user-friendly solution for modern agricultural practices.
2.11 CONCLUSION:
These kinds of applications are employed in many different contexts to improve current efficiency
and protect plants from illness. These apps help you become more efficient or modern. These
apps aid in lowering the number of bacterial illnesses that affect plants. It is not just for those who
own farms; it is also for those who enjoy gardening but are unsure of how to protect their plants
from illness. In order to improve our plant disease detection application, we have compared
several application aspects with one another. Since the earlier application had certain problems,
more recent ones were created, and in the majority of them, the data was already kept in the
database it only shows the information and some applications have detection option.[7]
INTRODUCTION
For any application, the right hardware and software are necessary and crucial components;
we are unable to create any kind of program without these two elements. This project primarily
uses a scalable, straightforward image processing technique. It can view the image from any
angle: up, down, left, right, behind, and in front. The dataset performs well in image
segmentation (splitting a digital image into several segments) and image processing. Since
Python contains the necessary libraries, it is the programming language that we are targeting
with our tools. [8]
3.2 HARDWARE:
The software application can be developed to run on various hardware devices such as desktop
computers, laptops, smartphones, and tablets with standard camera capabilities.
3.3 SOFTWARE:
The plant disease detection web application will be developed using the Python language,
with Jupyter Notebook for back-end programming and HTML/CSS for web front-end
programming. It consists of:
3.3.1 Front End:
The front end is developed using React. The choice of these technologies ensures a responsive
and visually appealing user interface.
3.3.2 Back End:
The back end is implemented in Python, leveraging the Django framework. Python's versatility
and the simplicity of Django align with the project's development goals.
3.3.3 Machine Learning:
TensorFlow and Keras are employed for developing and deploying machine learning models,
especially CNNs, for plant identification and disease detection.
3.3.4 Database:
SQL Server Management Studio is used for the database, providing a lightweight and easily
deployable solution for storing plant and disease information.
3.4 LIBRARIES:
3.4.1 OpenCV:
OpenCV (Open Source Computer Vision Library) provides a wide range of functions and
algorithms for image and video processing, including image recognition and feature extraction.
3.4.2 TensorFlow:
TensorFlow is an open-source machine learning framework that can be used for training and
deploying deep learning models, such as Convolutional Neural Networks (CNNs), which are
essential for image classification tasks.
3.4.3 Keras:
Keras is a high-level neural networks API that can serve as an abstraction layer on top of
TensorFlow. It provides a user-friendly interface for building and training deep learning models.[49]
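The report does not show the model definition at this point, but a small Keras CNN of the kind described could be sketched as follows; the layer sizes, the 128x128 input, and the 10-class output are illustrative assumptions, not the project's actual architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# A minimal CNN: alternating convolution/pooling layers followed by a
# fully connected classifier head.
model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),       # RGB leaf image (assumed size)
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),  # one unit per disease class (assumed count)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Training would then be a single `model.fit(train_images, train_labels, epochs=...)` call on the prepared dataset.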
3.4.4 Scikit-learn:
Scikit-learn is a popular machine learning library in Python that offers a wide range of
algorithms and tools for classification tasks.
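Once numeric features (colour, texture, shape) have been extracted from leaf images, any scikit-learn classifier can label them. In this hedged sketch the random feature matrix simply stands in for real extracted features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))            # 200 samples, 8 features each (synthetic)
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic "healthy / diseased" label

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))                   # training accuracy
```

Swapping in another estimator (an SVM, logistic regression, etc.) requires no other change, which is the main appeal of the library for classification tasks.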
3.5 ALGORITHMS:
3.5.1 Convolutional Neural Networks (CNNs):
CNNs are commonly used for image classification tasks. They can learn to extract relevant
features from plant images and classify them into different species or disease categories. [50]
3.5.2 Feature Extraction:
These algorithms can be employed to extract meaningful features from images, such as color,
texture, shape, or leaf venation patterns. These features can then be used for plant identification or
disease detection.
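A minimal sketch of colour-based feature extraction: per-channel means plus an 8-bin histogram per channel, of the kind that could feed a classifier. This is plain NumPy for illustration; the project's code would use OpenCV or similar:

```python
import numpy as np

def color_features(image: np.ndarray) -> np.ndarray:
    """image: H x W x 3 uint8 array -> 1-D feature vector."""
    means = image.reshape(-1, 3).mean(axis=0)             # mean per channel
    hists = [np.histogram(image[..., c], bins=8, range=(0, 256))[0]
             for c in range(3)]                           # 8-bin histogram per channel
    return np.concatenate([means, np.concatenate(hists)])

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[..., 1] = 200                                         # solid green test image
feats = color_features(img)
print(feats.shape)                                        # (27,)
```

Texture or shape descriptors would simply be concatenated onto the same vector.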
3.5.3 Image Segmentation:
Image segmentation algorithms can be used to separate plant regions from the background or
isolate specific parts of the plant for more accurate analysis.
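A hedged sketch of threshold-based segmentation: separating "plant" pixels from the background in a NumPy image array. The green-dominance rule below is an illustrative stand-in for the HSV thresholds a real pipeline would use:

```python
import numpy as np

def segment_plant(image: np.ndarray) -> np.ndarray:
    """Return a boolean mask that is True where green dominates red and blue."""
    r = image[..., 0].astype(int)
    g = image[..., 1].astype(int)
    b = image[..., 2].astype(int)
    return (g > r) & (g > b)

img = np.zeros((6, 6, 3), dtype=np.uint8)
img[2:5, 2:5] = (10, 180, 20)            # a 3x3 "leaf" patch
mask = segment_plant(img)
print(int(mask.sum()))                   # 9 plant pixels
```

The resulting mask can then restrict the later damage analysis to leaf pixels only.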
3.5.4 Data Preprocessing:
Data preprocessing techniques, such as resizing images, normalizing pixel values, and
augmenting the dataset, can be applied to improve the performance and robustness of the models.
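The three preprocessing steps just mentioned can be sketched in plain NumPy (a simple subsampling resize stands in for proper interpolation, which a real pipeline would take from OpenCV or Keras):

```python
import numpy as np

def preprocess(image: np.ndarray, size: int = 64) -> np.ndarray:
    """Subsample to size x size and scale uint8 pixels to [0, 1]."""
    h, w = image.shape[:2]
    rows = np.linspace(0, h - 1, size).astype(int)
    cols = np.linspace(0, w - 1, size).astype(int)
    small = image[np.ix_(rows, cols)]
    return small.astype(np.float32) / 255.0

def augment_flip(image: np.ndarray) -> np.ndarray:
    """Horizontal flip, a common dataset augmentation."""
    return image[:, ::-1]

img = (np.arange(128 * 128 * 3) % 256).reshape(128, 128, 3).astype(np.uint8)
out = preprocess(img)
print(out.shape)   # (64, 64, 3), values in [0, 1]
```

Applying `augment_flip` (and rotations, crops, etc.) to each training image effectively enlarges the dataset without new photographs.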
3.6 REQUIREMENTS:
As a user, I want to capture images of plants so that I can identify the plant disease
accurately.
As a user, I want an intuitive interface so that I can easily navigate through the application
and view identification results.
As a user, I want to receive information about the plant disease so that I can understand its
condition.
As a user, I expect the application to provide guidance for the identified plant disease so that I
can take proper care of the plant.
As a user, I want to detect common diseases in plants based on visual symptoms so that I can
implement timely treatment.
3.7 CONCLUSION:
In conclusion, by fusing scalable image processing methods with an intuitive user interface, this
plant disease detection project aims to provide a complete solution. Using HTML, CSS,
JavaScript, and Python for development, the application ensures compatibility with a range of
hardware. Precise plant identification and disease detection are made possible by the application
of TensorFlow and Keras for machine learning, specifically Convolutional Neural Networks
(CNNs). The project encompasses a wide range of functionality, using OpenCV for image and
video processing, SQLite for lightweight database administration, and scikit-learn for further
machine learning tools. The application's analytical capabilities are improved with the addition of
algorithms for feature extraction, data preprocessing, and image segmentation. The user
requirements place a strong emphasis on the necessity of precise disease detection, user-friendly
navigation, comprehensive disease information, and instructions for plant care. Overall, this
project prioritizes user experience and practicality in plant disease management in addition to
addressing the technical aspects of image processing and machine learning.[10]
4.1 INTRODUCTION:
In software design, developers translate the functional requirements into a structured plan,
determining the architecture, components, and interactions within the system. Modeling involves
creating visual representations, such as UML diagrams, to depict the system's structure, behavior,
and interactions, aiding both developers and stakeholders in understanding and communicating
complex software designs.[51] Effective software design and modeling not only contribute to the
clarity of system architecture but also facilitate collaboration among development teams,
streamline the implementation process, and ultimately result in the delivery of high-quality
software that aligns with user expectations and business needs.[11]
4.2 USE CASE DIAGRAM:
While each use case may itself contain a great deal of insight, a use case diagram can help give a
higher-level perspective of the system. It gives a streamlined and graphical representation of what
the system should really do.
4.2.1 Description:
Actor: User
Use Cases:
Upload Image:
Permits the user to view identification information about plants based on the uploaded images.
Provides the user with information on disease detection results related to the uploaded images.
4.3 ACTIVITY DIAGRAM:
An activity diagram is a behavioral diagram and an advanced form of flowchart that models the
flow from one activity to another.
4.3.1 Activity diagram
4.3.1 Description:
The diagram illustrates the system's sequential process. Upon user image input, the system
validates the format. Valid images undergo feature extraction for disease detection. If a
disease is present, the system presents a thorough report with information, precautions, and necessary care,
along with an option to generate a report. If there is no disease, a status message stating that no
disease was detected is displayed.
4.4 SYSTEM DIAGRAM:
System diagrams typically show the components of a process and its inputs and outputs, including
the hardware, software, databases, and people involved, as well as the communication pathways
between them.
4.4.1 Description:
The above diagram illustrates the sequential flow of the system. At first, the user inputs an image,
initiating the image pre-processing phase. In the process, relevant leaf features are extracted.
These extracted features are then compared with the dataset. The system uses this comparison to
classify the image and predict which disease class the data provided belongs to.
4.5 CLASS DIAGRAM:
The class diagram is the most widely recognized UML diagram type for software
documentation. Since most software being developed is still based on the OOP paradigm,
class diagrams are used to document the software.
4.5.1 Description:
The provided diagram contains the classes that cover the whole project's work. The classes are:
User: Represents the end-user interacting with the system, responsible for uploading and browsing
images, requesting predictions, and generating and viewing reports.
Image Processor: It is responsible for handling the important preprocessing operations, such as
feature extraction, segmentation, and disease classification, and then returns whether a disease is
present along with the disease name.
Report Generator: It manages the creation of the disease report and also includes an option to save
the generated report, enhancing the system's functionality.
Disease: It provides the necessary details of various diseases, including their names, symptoms, and
care instructions. It ensures the system's comprehensive understanding of disease-related
information.
Contact Us: It records the user's name, email, and any feedback they want to give.
4.6.1 Entity Relation diagram
4.6.1 Description:
The provided ERD diagram outlines the entities, such as Image, Report, Plant Type, and Disease Info.
Users can upload multiple images, each associated with one thorough report providing
information about the detected disease, precautions, and necessary care. User-Image (one-to-many),
Image-Report (one-to-one), Image-Disease Info (one-to-many), and Disease Info-Report
(one-to-one) are the relationships involved. Validity checks ensure that the format of
uploaded photos is correct. User ID, Image ID, Report ID, Disease ID, and timestamps are
examples of important attributes. The data connections and structure of the plant disease detection
system are briefly illustrated in this ERD.
4.7 SEQUENCE DIAGRAM:
In this figure the image is selected by browsing. The image is then sent to the server, and
processing begins: the features are extracted and the image is segmented. The result is predicted
and shown on the main screen in the form of the disease name with its cure.
4.7.1 DESCRIPTION:
Select Image:
The user selects an image containing relevant data, possibly showing a diseased plant.
Browse Image:
The user uploads the selected image to the system, triggering the process of analysis and
prediction.
Feature Extraction
As the image is received, the Prediction lifeline kicks in, performing feature extraction on the
uploaded image to identify key characteristics.
Segmentation:
The system then proceeds with segmentation, isolating specific regions of interest within the
image for more precise analysis.
Classify Disease:
The prediction engine classifies the features, determining the potential disease or condition. The
identified disease is then presented to the user for review.
Give Details:
The analysis results, including the identified disease and suggested cures, are presented in a report
to the user for their understanding and consideration.
Save Report:
4.8 OBJECT DIAGRAM:
An object diagram is a diagram of instances, including objects and data values. An object diagram
is an instance of a class diagram.
4.8.1 Description:
An object diagram represents classes' instances and their relationships at a specific point in time.
In the context of our plant disease detection system, the provided object diagram portrays the state
that the system is in when a user uploads an image and the system is processing it. Within this
scenario, several key objects are identified as follows:
1. User: This instance represents the user who uploaded the image.
3. Image Processor: This represents the instance responsible for processing the uploaded image.
4. Disease Information: This represents the details regarding the detected disease, known as a
crucial component of the processed information.
5. Report Generator: This represents the instance responsible for generating a comprehensive
disease report based on the processed image and associated data.
4.9.1 Description:
Device: Acts as the medium through which the user interacts with the system, usually a personal
computer or a mobile device.
Select Image: Functionality that lets the user choose an image for analysis.
Browse Image: Allows the user to navigate through images stored within the system.
Plant Disease Detection: The central system responsible for processing and analyzing the images.
Segmentation: Similar to the first diagram, it processes the image by segmenting it to facilitate
detailed analysis.
Feature Extraction: Extracts crucial features from the image necessary for disease detection.
Prediction: Analyzes the segmented and feature-extracted data to predict the disease.
Web server: Facilitates the exchange of data between the client-side and the server-side.
4.10 DEPLOYMENT DIAGRAM:
4.10.1 Description:
Device: Serves as the central interface for input and output operations within the application
server.
View Application: A functionality that allows the user to interact with the application’s interface.
Browse Image: Allows the user to browse and select images within the application.
Feature Extraction: A process where key features from the selected images are extracted for
analysis.
Segmentation: This process involves dividing the image into parts or segments to simplify or
change the representation of the image into something more meaningful and easier to analyze.
Prediction: Based on the extracted features and image segmentation, the application predicts the
disease.
View Result: Enables the user to view the outcomes of the analysis.
Webserver: Handles HTTP requests from the application server, facilitating communication
between the server and client-side.
4.11.1 Description
An operational diagram of the plant disease detection software: the user is first connected to a
desktop and sends an HTTP request over the internet, which is forwarded to a web server. The
web server passes the request on to an HTML web resource, which returns the HTML to the web
server, and the web server sends the response back to the desktop. The desktop then sends another
HTTP request to the web server and receives its response. Finally, an image search is sent from
the desktop to the web server, which forwards the image access request to a file server that
returns the display to the desktop.[53]
ALGORITHM ANALYSIS AND COMPLEXITY
INTRODUCTION
We discuss the algorithms used in our plant disease detection application, which is an image
processing project, along with their complexity; we also discuss the different types of algorithms
used in image processing projects. Image recognition is the task of taking an input image and
outputting the class that best describes it. Convolutional Neural Networks (CNNs) are the most
common approach, widely used for problems of visual perception, and they appear to outperform
all other techniques.[13] A CNN is characterized by an architecture composed of alternating
convolutional and pooling layers, optionally followed by fully connected layers. This architecture
has been shown to be successful at extracting and combining the features from an input image.
Here all the input layers are fully connected with the hidden layers.
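The alternating convolution and pooling just described can be illustrated with a minimal NumPy sketch; a real CNN adds many filters, nonlinearities, and learned weights, so this only shows the two core operations:

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid 2-D convolution (strictly cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map: np.ndarray, size: int = 2) -> np.ndarray:
    """Non-overlapping max pooling."""
    h, w = feature_map.shape
    cropped = feature_map[:h - h % size, :w - w % size]
    return cropped.reshape(h // size, size, w // size, size).max(axis=(1, 3))

img = np.arange(36, dtype=float).reshape(6, 6)
edge_kernel = np.array([[1.0, -1.0], [1.0, -1.0]])  # simple vertical-edge detector
fmap = conv2d(img, edge_kernel)    # shape (5, 5)
pooled = max_pool(fmap)            # shape (2, 2)
print(fmap.shape, pooled.shape)
```

Stacking several such layers, each shrinking the spatial size while growing the number of filters, is what gives a CNN its feature hierarchy.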
ALGORITHM:
7) Divide the data into training data, testing data, and validation data
9) Select the plant features that are used for prediction
10) Try:
Convert the plant image to grayscale with cv2.cvtColor
Start the model prediction of the plant disease and store it in result
If percentage > 50, return the predicted disease name with its prevention
Else
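The thresholding in step 10 can be sketched in Python. The grayscale conversion and the 50 % confidence cutoff follow the steps above, while the stub predictor, the class name, and the prevention text are placeholders for the trained CNN and the disease database:

```python
import numpy as np

# Hypothetical prevention lookup; the real data lives in Data.py.
PREVENTION = {"Tomato septoria leaf spot": "Remove infected leaves; apply fungicide."}

def stub_predict(gray_image: np.ndarray) -> tuple[str, float]:
    """Placeholder for the trained model: returns (class name, confidence %)."""
    return "Tomato septoria leaf spot", 92.0

def diagnose(image: np.ndarray) -> str:
    # Step 10a: convert to grayscale (cv2.cvtColor in the real pipeline).
    gray = image.mean(axis=2)
    # Step 10b: run the model and keep the confidence percentage.
    disease, percentage = stub_predict(gray)
    # Step 10c: only report a disease above the 50 % confidence cutoff.
    if percentage > 50:
        return f"{disease}: {PREVENTION.get(disease, 'no prevention data')}"
    return "No disease detected with sufficient confidence."

print(diagnose(np.zeros((64, 64, 3))))
```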
PSEUDO CODE:
Step 1
Step 2
Step 3
Step 4
Step 5
5.5 COMPLEXITIES OF THE ALGORITHM

WORST CASE:
For each data point in each batch during each epoch, the worst-case scenario involves multiple
layers going through convolution, activation, pooling, and full connectivity. Therefore, the sum of
these complexities can be used to express the worst-case time complexity.[54] Thus, if the
network has L layers, B batches, and E epochs, the worst-case time complexity is:

O(L * E * B * W * H * D * F * F * C)

where W and H are the width and height of the input, D is its depth (number of channels), F is the
filter size, and C is the number of filters. For a single convolutional layer, the cost is:

O(W * H * D * F * F * C)
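The convolutional cost above can be computed directly. In the sketch below, W and H are the input width and height, D its depth, F the filter size, and C the number of filters (their conventional meanings, assumed here); the example numbers are illustrative, not taken from the project's network:

```python
def conv_layer_ops(W: int, H: int, D: int, F: int, C: int) -> int:
    """Multiply-accumulate count for one convolutional layer: W*H*D*F*F*C."""
    return W * H * D * F * F * C

def network_ops(L: int, E: int, B: int,
                W: int, H: int, D: int, F: int, C: int) -> int:
    """Worst-case count over L layers, E epochs, and B batches (the O-bound above)."""
    return L * E * B * conv_layer_ops(W, H, D, F, C)

# Example: a 128x128 RGB input, 3x3 filters, 16 filters.
print(conv_layer_ops(128, 128, 3, 3, 16))  # 7,077,888 operations for one layer
```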
5.7 COMPARISON OF ALGORITHMS:

CNN: made for spatial data such as images. RNN: made for temporal data. GAN: a form of
unsupervised learning.
CNN: works on a fixed input size and generates a fixed output. RNN: works on arbitrary
input/output lengths. GAN: generates a complex output for a simple input.
CNN: used in image and video processing. RNN: used in text and speech analysis. GAN: used in
text-to-image and image-to-image translation.
5.8 ALGORITHMS:
5.8.1 Convolutional Neural Networks (CNNs):
CNNs are commonly used for image classification tasks. They can learn to extract relevant
features from plant images and classify them into different disease categories.
5.8.2 Feature Extraction:
These algorithms can be employed to extract meaningful features from images, such as color,
texture, shape, or leaf venation patterns. These features can then be used for plant identification or
disease detection.
5.8.3 Image Segmentation:
Image segmentation algorithms can be used to separate plant regions from the background or
isolate specific parts of the plant for more accurate analysis.
5.8.4 Data Preprocessing:
Data preprocessing techniques, such as resizing images, normalizing pixel values, and
augmenting the dataset, can be applied to improve the performance and robustness of the models.
5.9 CONCLUSION:
In conclusion, the plant disease detection application employs Convolutional Neural Networks
(CNNs) as a primary algorithm for image classification, allowing the model to effectively extract
features from plant images and categorize them based on disease. The algorithmic approach
involves initializing weights, training the model, and utilizing a step-by-step process for image
prediction and disease identification. The pseudo code outlines the key steps, including data
division, layer mapping, and result classification. A comparison between CNN, Recurrent Neural
Network (RNN), and Generative Adversarial Network (GAN) emphasizes the specialization of
each algorithm in spatial data, temporal data, and unsupervised learning, respectively. The project
also incorporates feature extraction algorithms, image segmentation, and data preprocessing
techniques to enhance the accuracy and robustness of disease detection. Overall, the combination
of these algorithms provides a comprehensive and effective solution for plant disease
identification in image processing projects. [14]
IMPLEMENTATION
6.1 Frontend:
Testing.jsx:
if (!selectedFile) {
return;
formData.append('image', selectedFile);
try {
console.log(response.data);
setResponseData(response.data);
} catch (error) {
};
setSelectedFile(file);
} else {
};
return (
<div>
{responseData && (
<div>
<h2>Results</h2>
<div>
<h4>Initial Image:</h4>
<img src={`data:image/jpeg;base64,${responseData.initial_image_uri}`}
alt="Initial" />
<p>{responseData.initial_message}</p>
</div>
<div>
<h4>Masked Image:</h4>
<img src={`data:image/jpeg;base64,${responseData.masked_image_uri}`}
alt="Masked" />
</div>
<div>
<h4>Segmented Image:</h4>
<img src={`data:image/jpeg;base64,${responseData.segmented_image_uri}`} alt="Segmented" />
</div>
<div>
<img src={`data:image/jpeg;base64,${responseData.damage_analysis_uri}`} alt="Damage Analysis" />
</div>
<div>
<h4>Recommendation:</h4>
<p>{responseData.recommendation}</p>
</div>
</div>
)}
</div>
);
};
6.2 Backend:
Views.py:
@csrf_exempt
def process_image_api(request):
image_file = request.FILES['image']
image_stream.seek(0)
masked_image_uri = apply_mask(image_stream)
image_stream.seek(0)
segmented_image_uri = process_segmentation(image_stream)
image_stream.seek(0)
recommendation = get_cure_recommendation(damage_classification, plant_id=1)
response_data = {
'initial_message': initial_results['result_text'],
'initial_image_uri': initial_results['image_uri'],
'masked_image_uri': masked_image_uri,
'segmented_image_uri': segmented_image_uri,
'damage_analysis_uri': damage_analysis_uri,
'total_percentage': total_percentage,
'damage_classification': damage_classification,
'recommendation': recommendation
}
return JsonResponse(response_data)
Utils.py:
if not plant_info:
return "No data available for this plant."
Data.py:
plant_disease_data = [
"Intensity": {
},
"id": 1
},
"Plant Name": "Tomato",
"Intensity": {
"high": "Cut off the plant below the bag and allow bag with plant...",
},
"id": 2
},
"Intensity": {
},
"id": 3
},
"Cure": "Use foliar fungicides. Apply Septum (a combination of phenol molecules, saponins,
flavonoids and silicic acid obtained from Equisetum arvense extract)...",
"Intensity": {
},
"id": 4
},
"Cure": "Protective spraying with mancozeb or zineb 0.2 % should be done to prevent
infection of tubers...",
"Intensity": {
"low": "Destruction of the foliage few days before harvest is beneficial...",
"medium": "The resistant varieties recommended are Kufri Naveen, Kufri Jeevan...",
},
"id": 5
Analysis.py:
import numpy as np
import cv2
import base64
def classify_damage(percentage):
else:
return segmented_image
def process_damage_analysis(segmented_image_stream):
# Analysis
_, dark_brown_percentage, mask_dark_brown = calculate_percentage_of_color(
    segmented_image, lower_bound_dark_brown, upper_bound_dark_brown)
segmented_with_contours = draw_contours_around_damaged_area(
    segmented_image.copy(), combined_mask)
segmented_with_contours_rgb = cv2.cvtColor(
    segmented_with_contours, cv2.COLOR_BGR2RGB)
plt.figure(figsize=(5, 5))
plt.imshow(segmented_with_contours_rgb)
plt.axis('off')
buf = BytesIO()
plt.savefig(buf, format='png')
buf.seek(0)
segmented_with_contours_uri = base64.b64encode(buf.read()).decode('utf-8')
plt.close()
damage_classification = classify_damage(total_percentage)
Segmentation.py:
import cv2
import numpy as np
import base64
def convert_to_hsv(image):
return combined_mask
def process_segmentation(image_stream):
hsv_image = convert_to_hsv(original_image)
lower_bound_dark_brown = np.array([10, 100, 20])
plt.figure(figsize=(5, 5))
plt.imshow(segmented_image_rgb)
plt.title("Segmented Image")
plt.axis('off')
buf = BytesIO()
plt.savefig(buf, format='png')
buf.seek(0)
segmented_image_uri = base64.b64encode(buf.read()).decode('utf-8')
plt.close()
return segmented_image_uri
Models.py:
from django.db import models
Model_processing.py:
import numpy as np
import base64
import os
def process_image(image_file):
image_stream = BytesIO(image_file.read())
image_stream.seek(0)
image = PilImage.open(image_stream)
img_array = img_to_array(img_resized)
prediction = model.predict(img_array)
confidence = np.max(prediction)
'Tomato healthy', 'Tomato Yellow Leaf Curl Virus', 'Tomato septoria leaf spot']
predicted_class = class_names[np.argmax(prediction)]
image_stream.seek(0)
Masking.py:
import cv2
import numpy as np
import base64
def apply_mask(image_stream):
image = cv2.imdecode(image_array, cv2.IMREAD_COLOR)
fixed_lower_s = 41
fixed_upper_h = 83
plt.figure(figsize=(5, 5))
plt.imshow(result_rgb)
plt.title('Masked Image')
plt.axis('off')
buf = BytesIO()
plt.savefig(buf, format='png')
buf.seek(0)
image_uri = base64.b64encode(buf.read()).decode('utf-8')
plt.close()
return image_uri
Settings.py:
"""
https://docs.djangoproject.com/en/5.0/topics/settings/
https://docs.djangoproject.com/en/5.0/ref/settings/
"""
BASE_DIR = Path(__file__).resolve().parent.parent
# See https://docs.djangoproject.com/en/5.0/howto/deployment/checklist/
SECRET_KEY = 'django-insecure-esuwla17au+*yh_&3w&bi%ie=wr=26r4-s241tt-hoj$pr-ows'
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'home',
'ml_app',
'ml_tests',
'step_by_step_processing',
'rest_framework',
'corsheaders',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'corsheaders.middleware.CorsMiddleware',
'django.middleware.common.CommonMiddleware',
]
ROOT_URLCONF = 'fyprumaisa.urls'
CORS_ALLOW_ALL_ORIGINS = True
TEMPLATES = [
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
WSGI_APPLICATION = 'fyprumaisa.wsgi.application'
# Database
# https://docs.djangoproject.com/en/5.0/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
# Password validation
# https://docs.djangoproject.com/en/5.0/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
'LOCATION': 'unique-snowflake',
# Internationalization
# https://docs.djangoproject.com/en/5.0/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_TZ = True
# https://docs.djangoproject.com/en/5.0/howto/static-files/
STATIC_URL = 'static/'
# https://docs.djangoproject.com/en/5.0/ref/settings/#default-auto-field
DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'
CHAPTER VII
Testing
7.1 INTRODUCTION:
Testing is a crucial stage in the development of plant disease detection software, since it guarantees
accurate, effective, and dependable system operation in a range of scenarios. It consists of a
number of methodical tasks intended to assess the functionality, correctness, usability, and
resilience of the software.[15] Through thorough testing, developers can find and fix bugs,
confirm that the program satisfies requirements, and verify that it works as intended in
practical situations.[16]
7.2 OBJECTIVES:
To guarantee that the program accurately and precisely detects and diagnoses plant diseases.
To assess how well the machine learning models, in particular the Convolutional Neural
Networks (CNNs), perform in identifying disease-associated patterns in pictures.
To evaluate the software's processing speed and response time to make sure it can effectively
perform real-time analysis.
To evaluate the system's scalability in handling big datasets and high-resolution photos.
To confirm that farmers and agronomists can easily navigate and utilize the program thanks to
its intuitive and user-friendly user interface.[55]
To guarantee that the program offers concise and actionable feedback to users.
To guarantee that the program can function dependably in a variety of settings, such as
altered lighting or image quality.
To evaluate the software's resistance to foreseeable problems like corrupted photos or
inadequate data.[17]
7.3 TYPES OF TESTING:
1. Functional Testing
2. Non-Functional Testing
7.4 FUNCTIONAL TESTING:
Software testing that concentrates on confirming that the program operates in accordance with
the given criteria is known as functional testing. It guarantees that every feature of the software
program performs in accordance with the requirements. The user interface, APIs, databases, security,
client/server applications, and overall software functionality are all checked during this kind of
testing.[18] The aim is to validate the software system against the functional requirements and
specifications.
7.4.1 TYPES AND TECHNIQUES OF FUNCTIONAL TESTING:
1. Unit Testing: Unit testing is a software testing methodology that involves testing individual
software units or components separately. These units usually correspond to the smallest
software components that can be tested, like modules, functions, or methods.
2. Acceptance Testing: User acceptance testing (UAT), also referred to as acceptance testing, is
an essential stage of software development in which the program is assessed to make sure it
satisfies end users' or stakeholders' needs and expectations. In contrast to unit testing, which
concentrates on testing specific code units, acceptance testing assesses whether the system as
a whole complies with the requirements and is ready for deployment.[19]
3. Integration Testing: In integration testing, software modules or components are merged and
evaluated as a group to make sure they function as intended. This kind of testing focuses on
confirming how various software components interact with one another and looking for any
flaws or problems that may occur during integration.
4. System Testing: System testing is a thorough stage in the software development life cycle in
which the integrated software system as a whole is assessed to make sure it satisfies
requirements and operates as intended in its intended setting.[20] Rather than concentrating
on specific modules or units, this kind of testing is carried out on the system as a whole,
including all of its parts and subsystems.
5. Black Box Testing: Testing without knowledge of the internal workings of the application.
Focuses on input and output.
6. White Box Testing: Testing with knowledge of the internal workings of the application.
Focuses on code structure.
7. Regression Testing: Software testing techniques like regression testing are used to
determine whether recent code changes have had an impact on the program's current
functionality. Throughout the software development lifecycle, regression testing plays a
crucial role in preserving the software's stability and integrity.[56]
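As a concrete illustration of unit testing (item 1 above), a small damage-classification helper of the kind used in this project's Analysis.py could be tested in isolation with Python's unittest; the function and its thresholds here are assumed for the example, not taken from the project's code:

```python
import unittest

def classify_damage(percentage: float) -> str:
    """Assumed thresholds (illustrative only): <10% low, <40% medium, else high."""
    if percentage < 10:
        return "low"
    if percentage < 40:
        return "medium"
    return "high"

class TestClassifyDamage(unittest.TestCase):
    def test_low(self):
        self.assertEqual(classify_damage(5), "low")

    def test_boundary(self):
        # 10% sits exactly on the low/medium boundary.
        self.assertEqual(classify_damage(10), "medium")

    def test_high(self):
        self.assertEqual(classify_damage(75), "high")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestClassifyDamage)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the unit under test takes a plain number and returns a plain string, the test needs no images, database, or web server, which is exactly what makes it a unit test.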
7.5 NON-FUNCTIONAL TESTING:
Non-functional testing refers to aspects of the software, such as performance, scalability, and
usability, that might not be connected to a particular function or user action. Rather than
concentrating on particular actions, this kind of testing focuses on how the system performs.[21]
7.5.1 TYPES AND TECHNIQUES OF NON FUNCTIONAL TESTING:
i. Performance Testing
ii. Load Testing
iii. Stress Testing
iv. Usability Testing
v. Security Testing
vi. Scalability Testing
vii. Benchmark Testing
viii. Reliability Testing
7.6 BLACK BOX TESTING:
Testing without knowledge of the internal workings of the application; it focuses on input and
output.[22]
Priority High
Expected Result
1. The logo should be prominently displayed and easily recognizable.
2. The logo should accurately represent the theme of the application.
3. The logo should be clear and not distorted or pixelated.
4. The logo should be aligned properly and positioned in the designated area.
5. The logo should have appropriate dimensions, maintaining a balanced appearance.
6. The logo should have good color contrast with the background.
7. The logo should not contain any spelling mistakes or visual inconsistencies.
8. Clicking on the logo should navigate the user to the homepage or perform a relevant action.
Status Pass
TABLE 7.1 Appearance Of The Logo
7.7 WHITE BOX TESTING:
Testing with knowledge of the internal workings of the application; it focuses on code structure.
b) Diagnose Button:
Priority High
Status Pass
TABLE 7.2 Diagnose Button
c) Upload Button
Priority High
Status Pass
TABLE 7.3 Upload Button
Priority High
Test Steps
1. Navigate to the section requiring data retrieval from stored data.
2. Initiate the action to retrieve specific data (e.g., accessing a user profile, querying disease information).
3. Verify that the system sends a request to the database for data retrieval.
4. Check that the retrieved data matches the expected data stored in the database.
Expected Result
1. The system successfully sends a request to the database without errors.
2. The retrieved data matches the expected data stored in the database.
3. The retrieved data is accurate, complete, and consistent with the data stored in the database.
Actual Result
The system successfully retrieves correct data from the database without errors.
Pass/Fail Criteria
- Pass: The system successfully retrieves correct data from the database without errors.
- Fail: The system encounters errors during data retrieval, or the retrieved data does not match the expected data stored in the database.
Status Pass
TABLE 7.4 Correct Data Retrieval
Priority High
Expected Result
1. The software successfully processes the uploaded image without errors.
2. The identified plant species matches the actual plant species depicted in the image.
3. The identified disease matches the actual disease affecting the plant.
Status Pass
TABLE 7.5 Correct Disease and Plant Identification
Test Case: User clicks on View Result.
Precondition: User should be on the relevant page.
Expected Result: On clicking the View Result button, cure instructions will be provided to the user.
Actual Result: The user is provided with cure instructions.
Status: Pass
TABLE 7.6 Functional Testing
7.8.1 CONCLUSION OF FUNCTIONAL TESTING
In functional testing, we tested all the core functionality of our web application. Different test cases were run to assess how well the application works. Testing confirms that key features operate smoothly, providing a reliable and user-friendly experience. Users can effectively upload and browse images, diagnose plant diseases, and view results with cure instructions.[24] Each test case, including image uploads, disease detection, and result viewing, passed successfully, demonstrating that the software's GUI is responsive and the core functionalities are robust. This ensures users can easily navigate the software, upload images, receive accurate disease diagnoses, and obtain valuable management advice, thereby supporting effective plant care and sustainable agricultural practices.[25]
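A functional test of the upload path can be scripted directly. The sketch below is a minimal, self-contained example; validate_upload is a hypothetical stand-in for the web application's real upload handler, and the allowed extensions and size limit are assumptions.

```python
# A minimal functional-test sketch for the image-upload path.
ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png"}

def validate_upload(filename: str, size_bytes: int, max_bytes: int = 5_000_000):
    """Mimic the checks a functional test exercises: file type and size."""
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    if ext not in ALLOWED_EXTENSIONS:
        return False, "unsupported file type"
    if size_bytes > max_bytes:
        return False, "file too large"
    return True, "ok"

# Exercise both the pass branch and a fail branch.
ok, msg = validate_upload("leaf.png", 120_000)
bad, why = validate_upload("leaf.gif", 120_000)
assert ok and msg == "ok"
assert not bad and why == "unsupported file type"
```

Scripting both outcomes makes the pass/fail criteria explicit, so a regression in either branch is caught automatically.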
In error handling testing, tests are run to uncover areas where potential errors may be hidden.
Result: The software identifies the plant and gives cure instructions as installed in the software.
Status: Fix
TABLE 7.7 Error Handling Testing
In error handling testing, tests were conducted to find potential bugs in the system. The first interface tested was the image upload. The next interface was the prediction of the plant disease, where no bug was found, as the system predicts the disease accurately. The system generates accurate results with cure instructions.[26]
7.10 REGRESSION TESTING
Regression testing of our web application was performed each time we made changes to the codebase.[27] Our web application has been tested many times.
REGRESSION TESTING
In regression testing, we tested our application multiple times with different inputs to check whether the tests failed or passed. In the first test case, we checked whether an image is uploaded successfully. In the second test, we checked whether the system predicts accurately. In the last two tests, we checked whether any error is generated by the quality of the image uploaded by the user.[27]
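Regression cases like those described above can be kept in a table and replayed after every change to the codebase. In this minimal sketch, check_image_quality is a hypothetical stand-in for the application's real image-quality gate; its thresholds are assumptions.

```python
# A regression-suite sketch: each case pairs an input with its expected
# outcome, and the whole table is replayed after every code change.
def check_image_quality(width: int, height: int, blur_score: float) -> str:
    """Hypothetical quality gate for uploaded images."""
    if width < 224 or height < 224:
        return "error: image too small"
    if blur_score > 0.8:
        return "error: image too blurry"
    return "ok"

REGRESSION_CASES = [
    ((640, 480, 0.1), "ok"),                        # successful upload
    ((100, 100, 0.1), "error: image too small"),    # quality error case 1
    ((640, 480, 0.95), "error: image too blurry"),  # quality error case 2
]

results = [check_image_quality(*args) == expected
           for args, expected in REGRESSION_CASES]
assert all(results)
```

Keeping the cases in a data table means a new regression case is a one-line addition, and every rerun covers all previous inputs.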
Integration testing is carried out by combining multiple components of a software system as a group and then testing them together.[28] The goal of integration testing is to ensure that all components work properly when integrated.[58]
INTEGRATION TESTING
7.11.1 CONCLUSION OF INTEGRATION TESTING
In integration testing, we tested how different components of our web application perform when integrated together. In the first test case, we verified that the image input button works as expected. In the second test case, we checked whether the plant disease detection system works properly. In the third and last case, we tested whether the View Result button gives accurate instructions and whether the logo is linked to the homepage.[29]
Unit testing is used to test a single unit or component of the software in isolation. The main reason for performing unit tests is to verify whether each unit or component works as expected.
UNIT TESTING
Test Case: View Result
Input: Null
Action: Click the button.
Expected Result: The disease will be detected; furthermore, the user will be provided with cure instructions and disease severity.
Status: Pass
7.12.1 CONCLUSION OF UNIT TESTING
DECISION TESTING
TABLE 7.10 Unit Testing
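A unit test exercises one such component in isolation. In this sketch, severity_label is a hypothetical unit standing in for the component that reports disease severity alongside cure instructions; its thresholds are illustrative.

```python
def severity_label(infected_area_ratio: float) -> str:
    """Map the fraction of infected leaf area to a severity label."""
    if not 0.0 <= infected_area_ratio <= 1.0:
        raise ValueError("ratio must be between 0 and 1")
    if infected_area_ratio < 0.1:
        return "mild"
    if infected_area_ratio < 0.4:
        return "moderate"
    return "severe"

# Verify each branch of the unit in isolation, including the error path.
assert severity_label(0.05) == "mild"
assert severity_label(0.25) == "moderate"
assert severity_label(0.7) == "severe"
try:
    severity_label(1.5)
    raised = False
except ValueError:
    raised = True
assert raised
```

Because the unit has no dependencies on the rest of the system, these checks run instantly and pinpoint failures to one component.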
7.14.3 RESULT PAGE (REPORT GENERATION)
CHAPTER VIII
APPENDICES
Appendices A:
Image processing is an essential component for capturing, enhancing, and analyzing photos of plants and for identifying patterns or anomalies suggestive of plant diseases. When integrated with machine learning, especially deep learning techniques, the program can learn disease patterns from labeled datasets, allowing reliable disease categorization in new photos.[31] Continuous monitoring enabled by image processing makes it possible to identify diseases early, which enables prompt interventions to stop their spread and enhances crop management techniques. Additionally, the combination of image processing and machine learning improves the software's capacity to manage intricate and massive agricultural datasets, offering farmers and agronomists insightful information on how to maximize crop health and productivity.[32]
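The processing chain described above can be sketched in a few lines, assuming NumPy is available. The filter, threshold, and synthetic image below are illustrative; a real deployment would feed such responses (or learned features) to a trained model rather than thresholding directly.

```python
import numpy as np

def detect_anomalies(img: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Normalize a grayscale image, apply a Laplacian high-pass filter,
    and flag pixels whose response exceeds a threshold as candidate
    anomalies (e.g. lesions)."""
    img = img.astype(float) / 255.0                           # normalize to [0, 1]
    kernel = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]])  # Laplacian
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):                                    # "valid" convolution
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return np.abs(out) > threshold                            # anomaly mask

# Synthetic leaf: uniform brightness with one dark "lesion" pixel.
leaf = np.full((8, 8), 200, dtype=np.uint8)
leaf[4, 4] = 20
mask = detect_anomalies(leaf)
assert mask.any()        # the lesion region is flagged
assert not mask[0, 0]    # uniform background is not
```

The high-pass response is zero over uniform regions, so only the abrupt intensity change around the lesion survives the threshold, which is exactly the "pattern or anomaly" idea described above.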
Appendices B:
The main goal of the project is to create a complete software program that targets farmers, gardeners, and plant enthusiasts and smoothly combines plant identification, disease detection, and preventive measures. It aims to enable users to recognize plants correctly and take good care of them. By using cutting-edge image recognition algorithms and disease detection methods, the program seeks to support sustainable agricultural methods. To decrease crop losses and lessen the need for chemical interventions, it intends to enable early disease diagnosis and provide timely information for preventive measures, thereby supporting more sustainable agricultural practices overall.[33]
Appendices C:
The confluence of plant identification and disease detection has been addressed in a number of applications, providing important new information in this field. In particular, Plant Vision stands out as a smartphone application that combines image analysis for disease diagnosis with deep learning for plant species recognition. Its primary features include the ability to identify plants in real time and diagnose diseases, which makes it a useful model for our study.[34] Deployment strategies have been successfully implemented on both desktop and mobile platforms; the primary cost factors relate to upgrades.
Appendices D:
Accurately identifying and diagnosing agricultural diseases depends heavily on the incorporation of sophisticated algorithms in plant disease detection software. Convolutional Neural Networks (CNNs), feature extraction algorithms, image segmentation procedures, and data preprocessing methods are notable among the various algorithms employed for their noteworthy contributions.[35] Together, these methods allow the program to segment pertinent regions, extract useful features from plant photos, and preprocess data for improved model performance.[60]
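As one concrete example of the segmentation step named above, a simple color rule can isolate leaf pixels from the background before a classifier is applied. The rule and the synthetic image are illustrative only, and NumPy is assumed to be available; a real pipeline would also resize and normalize the segmented region.

```python
import numpy as np

def segment_leaf(rgb: np.ndarray) -> np.ndarray:
    """Return a boolean mask of pixels where the green channel dominates,
    a crude proxy for 'leaf' versus background."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (g > r) & (g > b)

# Synthetic 4x4 RGB image: a green 2x2 "leaf" patch on a black background.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1:3, 1:3] = [30, 180, 40]
mask = segment_leaf(img)
assert mask.sum() == 4    # exactly the 2x2 leaf patch is selected
```

Segmenting first means the feature extractor and classifier only see leaf pixels, which reduces background noise in the learned features.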
Appendices E:
The right selection of hardware and software is essential to the creation of any application. The scalable and user-friendly image processing method used in this project allows for the thorough viewing of photos from several angles. The dataset performs well in tasks involving image processing and segmentation. Python is the preferred programming language because of its large library ecosystem, which makes it easier to implement the necessary capabilities effectively.[36]
Appendices F:
Description:
The User Interface allows users to upload images and view results.
The Feature Extraction Module extracts relevant features from the images.
Appendices G:
Steps:
Appendices H:
Classes: User, Image, Plant, Disease, FeatureExtractor, Classifier, Database, Notification.
Relationships:
Appendices I:
Instances:
User: user1
Image: PlantImage01
Plant: Tomato
Disease: Blight
FeatureExtractor: Extractor01
Classifier: Classifier01
Database: PlantDiseaseDB
Notification: Notification01
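The classes and instances listed above can be sketched with Python dataclasses. The attributes and cure text below are illustrative assumptions, not taken from the report's diagrams.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Plant:
    name: str

@dataclass
class Disease:
    name: str
    cure: str

@dataclass
class Image:
    filename: str
    plant: Plant
    disease: Optional[Disease] = None  # filled in after classification

# Recreate the instances listed above: PlantImage01 of a Tomato plant,
# later annotated with a Blight diagnosis.
img = Image("PlantImage01", Plant("Tomato"))
img.disease = Disease("Blight", "remove affected leaves and apply fungicide")
assert img.plant.name == "Tomato"
assert img.disease.name == "Blight"
```

Making `disease` optional mirrors the workflow: an uploaded image starts with no diagnosis, and the classifier attaches one later.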
Appendices J:
2. Use Cases:
Upload Image
Manage Database
Receive Notifications
3. Relationships:
Both User and Admin receive Notifications and view Plant Care Instructions.
Appendices K:
Activities:
Start
Capture/Upload Image
Pre-process Image
Extract Features
Classify Disease
Display Results
Send Notifications
End
1. Workflow Overview
The software follows a sequential process from image input to disease detection and
recommendation.
Steps:
Description: This workflow ensures a systematic analysis of plant images and provides users
with actionable insights to address detected diseases.
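The sequential workflow above can be sketched as a chain of small functions. Every function here is a hypothetical stub marking where the real implementation would plug in, and the returned values are illustrative.

```python
def preprocess(image):
    """Stub for the pre-processing stage (resize, normalize, denoise)."""
    return {"image": image, "normalized": True}

def extract_features(data):
    """Stub for the feature-extraction stage."""
    return {**data, "features": [0.1, 0.9]}

def classify(data):
    """Stub for the disease-classification stage."""
    return {**data, "disease": "Early Blight"}

def display(data):
    """Stub for the results-display stage."""
    return f"Detected: {data['disease']}"

def run_pipeline(image):
    """Run each stage in order, passing its output to the next."""
    data = image
    for stage in (preprocess, extract_features, classify):
        data = stage(data)
    return display(data)

report = run_pipeline("leaf.jpg")
assert report == "Detected: Early Blight"
```

Keeping each stage a separate function matches the activity diagram above and makes individual stages easy to unit-test or swap out.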
Texture Analysis: Examines texture patterns using Gray Level Co-occurrence Matrix
(GLCM) or similar techniques.
Convolutional Neural Networks (CNNs): Deep learning models trained to classify diseases
based on extracted features.
Support Vector Machines (SVMs): Machine learning models for disease classification.
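A minimal illustration of GLCM-based texture analysis: build a co-occurrence matrix for the horizontal neighbor offset and derive the contrast statistic. Libraries such as scikit-image provide this (graycomatrix/graycoprops); the hand-rolled version below, which assumes NumPy, just shows the idea on tiny synthetic images.

```python
import numpy as np

def glcm_contrast(img: np.ndarray, levels: int) -> float:
    """Contrast statistic from a gray-level co-occurrence matrix
    for the horizontal (0, 1) neighbor offset."""
    glcm = np.zeros((levels, levels))
    for row in img:
        for a, b in zip(row[:-1], row[1:]):   # count horizontal pairs
            glcm[a, b] += 1
    glcm /= glcm.sum()                        # normalize to probabilities
    i, j = np.indices(glcm.shape)
    return float(np.sum(glcm * (i - j) ** 2))

smooth = np.zeros((4, 4), dtype=int)          # uniform texture
rough = np.array([[0, 3, 0, 3]] * 4)          # alternating texture
assert glcm_contrast(smooth, 4) == 0.0
assert glcm_contrast(rough, 4) > 0.0          # rough texture scores higher
```

Uniform regions give zero contrast while alternating patterns score high, which is why GLCM statistics help separate healthy leaf texture from diseased, mottled texture.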
1. Trigger Events
Disease Detection: Notify users upon successful detection of diseases in uploaded images.
Database Updates: Notify users about new plant or disease data added to the database.
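The trigger events above suggest a simple publish-subscribe design. This sketch is illustrative only; the class and event names are assumptions, not the report's actual implementation.

```python
class Notifier:
    """Dispatch messages to subscribers registered per event type."""

    def __init__(self):
        self._subscribers = {}  # event type -> list of callbacks

    def subscribe(self, event_type, callback):
        self._subscribers.setdefault(event_type, []).append(callback)

    def notify(self, event_type, message):
        for callback in self._subscribers.get(event_type, []):
            callback(message)

received = []
notifier = Notifier()
notifier.subscribe("disease_detected", received.append)
notifier.notify("disease_detected", "Early Blight found in uploaded image")
notifier.notify("database_updated", "new disease data added")  # no subscriber
assert received == ["Early Blight found in uploaded image"]
```

New trigger events (detections, database updates) then become new event-type strings, with no change to the dispatch code.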
CHAPTER IX
ACHIEVEMENTS
CHAPTER X
FUTURE ENHANCEMENT
Given the feasibility and potential of the Plant Disease Detection Software, several advancements can be made to extend the software and maximize its performance. One improvement would be to expand the list of diseases the software can identify beyond the eight original diseases: Septoria Leaf Spot, Yellow Leaf Curl Virus, Black Measles, Isariopsis Leaf Spot, Apple Scab, Cedar Apple Rust, Early Blight, and Late Blight. To achieve this expansion, it would be necessary to feed our AI models more extensive and richer datasets that allow for better and broader coverage of plant diseases. We can also improve general user access and involvement by creating a multi-user website where users can create their own accounts to save their history; this would allow for better disease management and progress tracking. Moreover, it would be beneficial to create native applications for the iOS and Android platforms, letting users experience the software easily and naturally. These apps could take advantage of device features such as high-definition cameras and touch screens, which would help capture images of plant leaves more accurately and improve user interaction. However, since user data is sensitive information, it is essential to employ enhanced security features such as end-to-end encryption and multi-factor authentication. To add a dynamic element to the application, user competitions could be incorporated to encourage users to compete over who best manages the health and wellbeing of their plants. Together, these improvements will help make our software the best it can be by providing users with efficient and sustainable means of practicing agriculture.
REFERENCES
[1] H. Sabrol and S. Kumar, "Recent studies of image and soft computing techniques for plant
disease recognition and classification," International Journal of Computer Applications, vol.
126, no. 1, 2015.
[2] A. Meunkaewjinda, P. Kumsawat, K. Attakitmongcol, and A. Srikaew, "Grape leaf disease detection from color imagery using hybrid intelligent system," in 2008 5th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology, 2008, vol. 1: IEEE, pp. 513-516.
[3] Sankaran S, Mishra A, Ehsani R, Davis C. A review of advanced techniques for detecting
plant diseases. Computers and electronics in agriculture. 2010 Jun 1;72(1):1-3
[4] Nagaraju, Mamillapally, and Priyanka Chawla. "Systematic review of deep learning
techniques in plant disease detection." International journal of system assurance engineering and
management 11, no. 3 (2020): 547-560.
[5] Martinelli F, Scalenghe R, Davino S, Panno S, Scuderi G, Ruisi P, Villa P, Stroppiana D,
Boschetti M, Goulart LR, Davis CE. Advanced methods of plant disease detection. A review.
Agronomy for Sustainable Development. 2015 Jan;35:1-25.
[6] Shoaib, Muhammad, Babar Shah, Shaker Ei-Sappagh, Akhtar Ali, Asad Ullah, Fayadh
Alenezi, Tsanko Gechev, Tariq Hussain, and Farman Ali. "An advanced deep learning models-
based plant disease detection: A review of recent research." Frontiers in Plant Science 14 (2023):
1158933.
[7] Harakannanavar, Sunil S., et al. "Plant leaf disease detection using computer vision and
machine learning algorithms." Global Transitions Proceedings 3.1 (2022): 305-310.
[8] De Luna, R. G., Dadios, E. P., & Bandala, A. A. (2018, October). Automated image
capturing system for deep learning-based tomato plant leaf disease detection and recognition. In
TENCON 2018-2018 IEEE Region 10 Conference (pp. 1414-1419). IEEE.
[9] Golhani, K., Balasundram, S. K., Vadamalai, G., & Pradhan, B. (2018). A review of neural
networks in plant disease detection using hyperspectral data. Information Processing in
Agriculture, 5(3), 354-371.
[10] Shoaib, M., Shah, B., Ei-Sappagh, S., Ali, A., Ullah, A., Alenezi, F., Gechev, T., Hussain,
T. and Ali, F., 2023. An advanced deep learning models-based plant disease detection: A review
of recent research. Frontiers in Plant Science, 14, p.1158933.
[11] Moshou, Dimitrios, Cedric Bravo, Roberto Oberti, Jon West, Luigi Bodria, Alastair
McCartney, and Herman Ramon. "Plant disease detection based on data fusion of hyper-spectral
and multi-spectral fluorescence imaging using Kohonen maps." Real-Time Imaging 11, no. 2
(2005): 75-83.
[12] Devi, P. R. (2021, August). Leaf Disease Detection Using Deep Learning. In 2021 Second
International Conference on Electronics and Sustainable Communication Systems (ICESC) (pp.
1797-1804). IEEE.
[13] Harakannanavar, Sunil S., et al. "Plant leaf disease detection using computer vision and
machine learning algorithms." Global Transitions Proceedings 3.1 (2022): 305-310.
[14] Tete, Trimi Neha, and Sushma Kamlu. "Plant Disease Detection Using Different
Algorithms." RICE. 2017.
[15] Jones, Lyle V., and Donald W. Fiske. "Models for testing the significance of combined
results." Psychological Bulletin 50.5 (1953): 375.
[16] Jones, Lyle V., and Donald W. Fiske. "Models for testing the significance of combined
results." Psychological Bulletin 50, no. 5 (1953): 375.
[17] Goetz, Christopher G., et al. "Movement Disorder Society‐sponsored revision of the Unified
Parkinson's Disease Rating Scale (MDS‐UPDRS): scale presentation and clinimetric testing
results." Movement disorders: official journal of the Movement Disorder Society 23.15 (2008):
2129-2170.
[18] Goetz, Christopher G., Barbara C. Tilley, Stephanie R. Shaftman, Glenn T. Stebbins, Stanley Fahn, Pablo Martinez‐Martin, Werner Poewe et al. "Movement Disorder Society‐sponsored revision of the Unified Parkinson's Disease Rating Scale (MDS‐UPDRS): scale presentation and clinimetric testing results." Movement Disorders: official journal of the Movement Disorder Society 23, no. 15 (2008): 2129-2170.
[19] Hooda, Itti, and Rajender Singh Chhillar. "Software test process, testing types and techniques." International Journal of Computer Applications 111, no. 13 (2015).
[20] Madsen, H. S. (1983). Techniques in Testing. Oxford University Press, 200 Madison Ave., New York, NY 10016 (ISBN-0-19-434132-1, $5.95).
[21] Parshall, Cynthia G., Tim Davey, and Peter J. Pashley. "Innovative item types for computerized testing." Computerized adaptive testing: Theory and practice (2000): 129-148.
[22] Duchastel, P. C. (1981). Retention of prose following testing with different types of tests. Contemporary Educational Psychology, 6(3), 217-226.
[23] Constâncio, V., Nunes, S.P., Henrique, R. and Jerónimo, C., 2020. DNA methylation-based testing in liquid biopsies as detection and prognostic biomarkers for the four major cancer types. Cells, 9(3), p.624.
[24] Madsen, Harold S. Techniques in Testing. Oxford University Press, 200 Madison Ave., New York, NY 10016 (ISBN-0-19-434132-1, $5.95), 1983.
[25] Sawant, Abhijit A., Pranit H. Bari, and P. M. Chawan. "Software testing techniques and strategies." International Journal of Engineering Research and Applications (IJERA) 2, no. 3 (2012): 980-986.
[26] Bray, D.E. and McBride, D., 1992. Nondestructive testing techniques.
[27] Beizer, B., 1995. Black-box testing: techniques for functional testing of software and systems. John Wiley & Sons, Inc.
[28] Nidhra, S. and Dondeti, J., 2012. Black box and white box testing techniques-a literature review. International Journal of Embedded Systems and Applications (IJESA), 2(2), pp.29-50.
[29] Do H, Elbaum S, Rothermel G. Supporting controlled experimentation with testing techniques: An infrastructure and its potential impact. Empirical Software Engineering. 2005 Oct;10:405-35.
[30] Balci, Osman. "Validation, verification, and testing techniques throughout the life cycle of a simulation study." Annals of Operations Research 53 (1994): 121-173.
[31] Vishnoi, V. K., Kumar, K., & Kumar, B. (2021). Plant disease detection using computational intelligence and image processing. Journal of Plant Diseases and Protection, 128, 19-53.
[32] Narayanasamy, P. Microbial Plant Pathogens-Detection and Disease Diagnosis: Viral and Viroid Pathogens, Vol. 3. Springer Science & Business Media, 2010.
[33] Vishnoi, Vibhor Kumar, Krishan Kumar, and Brajesh Kumar. "A comprehensive study of feature extraction techniques for plant leaf disease detection." Multimedia Tools and Applications 81, no. 1 (2022): 367-419.
[34] Rizk H. Automated early plant disease detection and grading system: development and implementation.
[35] Geetharamani, G. and Pandian, A., 2019. Identification of plant leaf diseases using a nine-layer deep convolutional neural network. Computers & Electrical Engineering, 76, pp.323-338.
[36] Barbedo JG, Koenigkan LV, Santos TT. Identifying multiple plant diseases using digital image processing. Biosystems Engineering. 2016 Jul 1;147:104-16.
[37] Martinelli, Federico, Riccardo Scalenghe, Salvatore Davino, Stefano Panno, Giuseppe Scuderi, Paolo Ruisi, Paolo Villa et al. "Advanced methods of plant disease detection. A review." Agronomy for Sustainable Development 35 (2015): 1-25.
[38] Bhise, N., S. Kathet, S. Jaiswar, and Amarja Adgaonkar. "Plant disease detection using machine learning." International Research Journal of Engineering and Technology (IRJET) 7, no. 7 (2020): 2924-2929.
[39] "Review of Computer Vision Techniques for the Analysis of Plant Diseases," by N. Barbedo, 2016.
[40] "Deep Learning-Based Image Analysis for Plant Disease Detection," by Y. Jiang et al., 2018.
[41] "Automatic Detection of Plant Diseases Using Machine Learning Techniques," by P. Dey et al., 2019.
[42] "A Survey on Deep Learning Techniques for Plant Disease Detection and Classification," by M. Sharma et al., 2020.
[43] "Computer Vision-Based Tomato Disease Detection Using Convolutional Neural Networks," by S. H. Khan et al., 2017.
[44]"Mobile-Based Deep Learning Model for Real-Time Plant Disease Detection," by R. P.
Singh et al., 2021.
[45]"A Comparative Study of Machine Learning Algorithms for Plant Disease Identification," by
A. K. Mishra et al., 2018.
[46]"Real-Time Detection and Classification of Plant Diseases Using a Deep Learning
Framework," by J. Zhang et al., 2019.
[47]"Automated Detection of Tomato Diseases Using Image Processing and Machine Learning
Techniques," by A. A. Selvaraj et al., 2019.
[48]"Deep Learning Approaches for Plant Disease Detection and Classification: A Review," by
S. M. A. Karim et al., 2021.
[49]"Development of a Smartphone Application for Plant Disease Diagnosis," by H. N. Phoulady
et al., 2020.
[50]"A Review on Deep Learning Techniques for Plant Disease Detection and Diagnosis," by S.
S. Hossain et al., 2020.
[51]"Fruit Disease Detection Using Deep Learning: A Review," by A. Akhtar et al., 2021.
[52]"Real-Time Detection of Potato Diseases Using Image Processing and Machine Learning,"
by G. S. R. Naik et al., 2018.
[53]"A Comparative Study of Deep Learning Models for Tomato Disease Detection," by R. K.
Gupta et al., 2020.
[54]"Mobile-Based Plant Disease Detection Using Convolutional Neural Networks," by V. S.
Patel et al., 2021.
[55]"Transfer Learning-Based Approach for Early Detection of Wheat Diseases," by S. K. Das et
al., 2019.
[56]"A Survey on Computer Vision Techniques for Plant Disease Detection," by S. Deb et al.,
2021.
[57]"Integration of Machine Learning and IoT for Smart Agriculture: A Review," by N. K.
Gupta et al., 2020.
[58]"Remote Sensing Techniques for Early Detection of Crop Diseases: A Review," by P. Dutta
et al., 2019.
[59]"Automated Detection of Crop Diseases Using UAV Imagery: A Review," by A. M. Singh et
al., 2020.
[60]"A Review on Recent Advances in Deep Learning Techniques for Plant Disease Detection,"
by A. N. Jadhav et al., 2021.