Communication
Computer Vision System for Mango Fruit Defect Detection
Using Deep Convolutional Neural Network
R. Nithya 1, B. Santhi 1,*, R. Manikandan 1, Masoumeh Rahimi 2 and Amir H. Gandomi 3,*
Abstract: Machine learning techniques play a significant role in agricultural applications for comput-
erized grading and quality evaluation of fruits. In the agricultural domain, automation improves the
quality, productivity, and economic growth of a country. The quality grading of fruits is an essential
measure in the export market, especially defect detection of a fruit’s surface. This is especially
pertinent for mangoes, which are highly popular in India. However, the manual grading of mango
is a time-consuming, inconsistent, and subjective process. Therefore, a computer-assisted grading
system has been developed for defect detection in mangoes. Recently, machine learning techniques,
such as the deep learning method, have been used to achieve efficient classification results in digital
image classification. Specifically, the convolutional neural network (CNN) is a deep learning technique that can be employed for automated defect detection in mangoes. This study proposes a CNN-based computer vision system for classifying mangoes as either good or defective. After training and testing the system on a publicly available mango database, the experimental results show that the proposed method achieved an accuracy of 98%.
Keywords: fruit defect detection; machine learning; deep learning; convolutional neural network; mango

1. Introduction
Mango is a major fruit crop in India that is rich in vitamins and minerals [1]. The total production of mangoes is estimated to be 50.6 million tons worldwide, 39% of which occurs in India; Thailand and China are the next highest producers. Uttar Pradesh, Tamil Nadu, Telangana, Andhra Pradesh, Kerala, Bihar, and Karnataka are the main producing states in India [2]. While mango production in India is increasing every year, the total export of mangoes from India is very low due to the lack of nondestructive, reliable, and high-quality automated tools and techniques. The quality of mangoes can be identified from their skin, which displays green spots and yellowish speckles. In India, quality grading of mango fruit is done manually by experienced evaluators based on defect detection of the fruit's surface. Manual sorting requires workers to perform sensory tasks in large volumes and for long working hours; thus, the manual grading of mango fruit is a time-consuming process [3–5]. In addition, this process requires a significant number of employees, which increases the cost of production and can lead to uncertainty and inaccurate results because individual judgments are subjective and inconsistent across fruit objects. Overcoming these limitations requires computer image sensors that are more effective and efficient. Therefore, we propose a reliable method for automatic defect detection of mangoes and a computer vision system for automated grading.
The defect detection of mango fruits using a computer vision system includes three macro steps: preprocessing of an input image, feature extraction, and classification of the input image. The computer vision system in the agricultural domain enhances the quality
of food products for the export market, in which fruit grading is an essential process to
select quality fruits. Hence, there is a need for an intelligent computer vision system for
fruit grading. Over the past few years, non-destructive machine learning techniques have
been employed for efficient fruit quality assurance and have become an integral part of
developing computer vision systems for fruit defect detection [6,7]. In addition, image
processing and data mining techniques are widely employed in the agricultural domain
for computerized defect detection and quality grading of produce. The objective of this
work was to develop a computer vision system for mango defect detection using advanced
machine learning techniques, such as the convolutional neural network (CNN).
The remainder of the paper is organized as follows. A summary of the related works
is described in Section 2. In Section 3, the outline of the proposed model and methodology
are explained. Section 4 presents the performance measures to assess the effectiveness of
the proposed methodology. Section 5 discusses the experimental results, and Section 6
provides a comparative analysis of the proposed method and existing works. Section 7
contains concluding remarks about the proposed methodology and obtained results.
2. Related Work
Machine learning and image processing techniques have been extensively used in
the agricultural domain over the last few years. In particular, computer vision systems
provide a nondestructive, low-cost, fast and reliable means for fruit defect detection. Thus
far, several works have investigated automated fruit defect detection based on the fruit’s
surface.
Patel et al. [8] proposed a computer vision system for the non-destructive physical
characterization of mangoes considering various morphological features and multilin-
ear regression models for quality grading and obtained an accuracy of 97.9%. Similarly,
Patel et al. [9] developed a computer vision system for defect detection of mangoes using
a reflected ultraviolet imaging technique. They found that a band-pass filter of 400 nm is
appropriate to detect the defective mangoes which were not identified by the RGB color
camera. Nandi et al. [10] introduced a computer vision methodology for grading mangoes
based on maturity and quality. They used fuzzy incremental learning for grading mangoes
and obtained an accuracy of 87%. Nandi et al. [11] presented a machine vision system for
the prediction of mango maturity level. After various texture features were extracted from
mango images, the relevant features were selected by the recursive feature elimination
technique using SVM as a classifier. Huang et al. [12] proposed a computer vision method-
ology for the non-destructive detection of mango quality, which includes a colorimetric
sensor array, principal component analysis, and support vector classification for qualitative
discrimination. They classified the mangoes into three grades and obtained an accuracy of
97.5%. Guojin et al. [13] introduced a computer vision technique for mango appearance
rank classification based on appearance characteristics using extreme learning machine
neural network to rank the mangoes. Sahu et al. [14] developed an automated tool for
maturity and defect identification of mango fruits using digital image analysis considering
size, color, and shape features of the mangoes. Andrushia et al. [15] presented an auto-
matic skin disease identification system for mangoes in which extracted features such as
texture, color, and shape are selected from a digital image using the artificial bee colony
optimization and SVM as a classifier. Momin et al. [16] developed a computer vision system
for grading mango fruits based on geometry and shape features using image processing
techniques such as global thresholding, color binarization, median filter, and morphological
processing and achieved an accuracy of 97%.
Additionally, Raghavendra et al. [17] proposed an optimal wavelength selection methodology for mango defect detection and obtained an accuracy of 84.5%. Kumari et al. [18]
developed an automated system for mango defect detection using enhanced fuzzy k-means
clustering, maximally correlated principal component analysis, and back propagation-
based discriminant classifier. Patel et al. [19] presented a monochrome computer vision
system for mango defect detection, which achieved an accuracy of 97.88%.
3. Methodology
The proposed computer-aided detection (CAD) system includes the following phases: (a) preprocessing to enhance the image quality; (b) data augmentation to increase the number of data samples; and (c) classification of mango images as either good or defective. The outline of the proposed methods is shown in Figure 1.

Figure 1. Overview of the proposed methodology.
3.1. Dataset
The dataset used in this study contains 50 good and 50 defective Kent mango images, 1024 × 1024 pixels in size. This database is publicly available at http://www.cofilab.com/portfolio/mangoesdb/ (accessed on 1 March 2022).
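For concreteness, a minimal loader for this two-class image collection might look like the sketch below. The folder layout, file extension, and the 224 × 224 target size (which anticipates the CNN input described later) are assumptions about a local copy of the database, not part of the original work.

```python
# Hypothetical loader for the Kent mango images (50 good, 50 defective).
# Folder names and the .jpg extension are assumptions about the local copy;
# images are resized to the 224x224 CNN input size used later in the paper.
from pathlib import Path
import numpy as np
from PIL import Image

def load_mango_dataset(root: str, size=(224, 224)):
    images, labels = [], []
    for label, folder in enumerate(["good", "defective"]):  # 0 = good, 1 = defective
        for path in sorted(Path(root, folder).glob("*.jpg")):
            img = Image.open(path).convert("RGB").resize(size)
            images.append(np.asarray(img, dtype=np.float32) / 255.0)
            labels.append(label)
    return np.stack(images), np.array(labels)
```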
3.2. Preprocessing
Preprocessing is an important step in the computer vision system that is performed to enhance image quality. Noise removal and image enhancement to show defective areas on the surface of fruits are the preprocessing techniques considered in this work.
3.2.1. Histogram Equalization
Histogram equalization (HE) is a widely used preprocessing technique to improve the quality of a digital image. It improves the contrast of an image using the image's histogram: HE works by uniformly distributing the pixel intensity values [20], which is accomplished by effectively spreading out the most frequent pixel values. This approach is suitable for images whose foreground and background are both dark or both bright. Let I be a given image represented as an m × n matrix of integer pixel intensities ranging from 0 to L − 1, where L is the number of grey levels in the image, often 256, and let p denote the normalized histogram of I with a bin for each possible intensity. The equalized image then maps each intensity k to T(k) = floor((L − 1) · (p0 + p1 + ... + pk)), i.e., the cumulative distribution of intensities scaled to the full grey-level range.
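To make the mapping above concrete, the following is a minimal NumPy sketch of histogram equalization. It is an illustrative re-implementation of the standard technique; the authors report using MATLAB, so this Python version is for exposition only.

```python
# Illustrative NumPy implementation of histogram equalization (HE) as described
# above; this is not the authors' MATLAB code.
import numpy as np

def equalize_histogram(image: np.ndarray, levels: int = 256) -> np.ndarray:
    """Remap grey levels so that intensities spread over the full 0..L-1 range."""
    # Normalized histogram p: one bin per possible intensity 0..L-1.
    hist = np.bincount(image.ravel(), minlength=levels)
    p = hist / image.size
    # Cumulative distribution scaled to the grey-level range gives the mapping T(k).
    cdf = np.cumsum(p)
    mapping = np.floor((levels - 1) * cdf).astype(np.uint8)
    return mapping[image]

# Example: equalize a synthetic low-contrast 8-bit image.
low_contrast = np.random.randint(100, 140, size=(1024, 1024), dtype=np.uint8)
enhanced = equalize_histogram(low_contrast)
```

For the RGB mango images, HE would typically be applied per channel or to a luminance channel; the sketch operates on a single-channel image for simplicity.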
Figure 2. Images of mango samples: (a) Sample mango images; (b) Images rotated at 90°; (c) Images rotated at 180°; (d) Images rotated at 270°.
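The rotations shown in Figure 2 can be reproduced with a few lines of Python, as in the hedged sketch below. Since the paper reports growing the dataset from 100 to 800 images, transformations beyond these three rotations were presumably also used; this sketch covers only the rotations depicted, and the directory paths are hypothetical.

```python
# Illustrative rotation-based augmentation (90, 180, 270 degrees) matching Figure 2.
# The paper's full augmentation pipeline is not published; folder names are
# hypothetical and only the rotations shown in the figure are reproduced here.
from pathlib import Path
from PIL import Image

def augment_with_rotations(src_dir: str, dst_dir: str) -> None:
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    for path in Path(src_dir).glob("*.jpg"):
        img = Image.open(path)
        img.save(out / path.name)  # keep the original sample as well
        for angle in (90, 180, 270):
            # expand=True keeps the whole rotated image inside the output canvas
            img.rotate(angle, expand=True).save(out / f"{path.stem}_rot{angle}{path.suffix}")

# Example (hypothetical paths): turns 100 source images into 400 images.
# augment_with_rotations("mangoes/good", "mangoes_augmented/good")
```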
3.4. Convolutional Neural Network
The convolutional neural network (CNN) is an improved artificial neural network [22] that is capable of classifying and recognizing defect regions in mango images via a computer vision system. This neural network works by processing visual inputs and performing tasks such as object recognition, segmentation, and classification of images [23]. CNN is similar to a multilayer perceptron neural network.

3.4.1. Convolution Layer
In a CNN, the convolution layer applies a filter to obtain features from the input image and produces feature maps, or activation maps, as the output. The parameters used in the convolution operation are the filter size F and stride S. The first convolution layer produces low-level feature maps such as edges, corners, and lines, while the subsequent layers generate high-level feature maps. The input image of size W × H × C (width, height, channels) is convolved with N kernels of size k × k × D, where D is the number of RGB channels and k is less than the image dimensions [24]. The convolution operations with N kernels generate N feature maps. The convolution operations start from the top-left corner of the image and are repeated until the kernel reaches the bottom-right corner.

3.4.2. Pooling Layer
The pooling layer, also known as the downsampling layer, reduces the feature maps and the computational complexity of the CNN. The pooling operation is applied after a convolution and provides spatial invariance. It minimizes the dimension of each feature map while preserving the most important features. The most common approaches employed in pooling operations are average pooling and max-pooling. In average pooling, the average of the values in the region of the feature map covered by the filter is used, whereas max-pooling takes the maximum value from that region. In this study, max-pooling was employed.

3.4.3. Fully Connected Layer
This layer uses the features obtained from the pooling layer for the classification of input images. The fully connected layer flattens the pooling output into a large vector. The last fully connected layer uses a softmax activation function to compute an output between 0 and 1 for each class of the input image. One or more fully connected layers are placed at the end of the CNN architecture.

3.4.4. Rectified Linear Units (ReLU)
CNN applies a ReLU activation function to the convolved feature after every convolution operation in order to introduce nonlinearity into the model. ReLU is an activation function that is used to improve the training of deep convolutional neural networks. The advantage of the ReLU activation function is faster training [25].
Figure 3. The proposed CNN architecture model.
Table 1. The proposed CNN architecture model.

Layer | Type | Number of Feature Maps | Number of Neurons in the Layer | Size of the Kernel Used to Form Each Feature Map | Stride
0 | Input | 3 | 224 × 224 × 3 | - | -
1 | Convolution | 16 | 222 × 222 × 16 | 3 × 3 × 3 | 1
2 | Max-pooling | 16 | 111 × 111 × 16 | 2 × 2 | 2
3 | Convolution | 32 | 109 × 109 × 32 | 3 × 3 × 16 | 1
4 | Max-pooling | 32 | 55 × 55 × 32 | 2 × 2 | 2
5 | Convolution | 32 | 53 × 53 × 32 | 3 × 3 × 32 | 1
6 | Max-pooling | 32 | 27 × 27 × 32 | 2 × 2 | 2
7 | Convolution | 64 | 25 × 25 × 64 | 3 × 3 × 32 | 1
8 | Max-pooling | 64 | 23 × 23 × 64 | 3 × 3 × 64 | 1
9 | Convolution | 64 | 12 × 12 × 64 | 2 × 2 | 2
10 | Convolution | 128 | 10 × 10 × 128 | 3 × 3 × 64 | 1
11 | Max-pooling | 128 | 8 × 8 × 128 | 3 × 3 × 128 | 1
12 | Fully Connected | - | 128 | - | -
13 | Fully Connected | - | 64 | - | -
14 | Output | - | 2 | - | -
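For readers who want a runnable point of reference, the sketch below approximates the layer stack of Table 1 and Figure 3 in Keras. The paper states the model was implemented in MATLAB, so this Python version is only an approximation: the layer sequence, filter counts, dense sizes, and the 224 × 224 × 3 input follow the table, while padding and a few stride choices are assumptions, so intermediate feature-map sizes may differ slightly from those listed.

```python
# Keras approximation of the CNN in Table 1 / Figure 3 (the original model was
# built in MATLAB). Filter counts and dense sizes follow the table; padding and
# some stride choices are assumptions, so intermediate shapes may differ slightly.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_mango_cnn(input_shape=(224, 224, 3), num_classes=2) -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(64, 3, activation="relu"),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),  # good vs. defective
    ])
    # Training settings reported in Section 5: Adam optimizer, learning rate 0.001.
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```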
4. Performance Measures
A classification test and k-fold cross-validation were performed to determine the
efficiency of the proposed method at identifying mangoes as either good or defective.
In k-fold cross-validation, all images were utilized to train and test the proposed model, whereby all data samples were randomly divided into k groups [26]. One fold was used for testing, and the other k − 1 folds were employed for training; this process is repeated so that each fold serves once as the test set. In this work, 10-fold cross-validation is used. In the classification test, the presence of a defective mango is positive, whereas the absence of a defect is negative.
Four different outcomes were possible as described in the confusion matrix in Table 2:
True Negative (TN), True Positive (TP), False Negative (FN), and False Positive (FP) [27].
The classification performance was estimated by various measures, including area under the
ROC (Receiver Operating Characteristic) curve (AUC), accuracy, sensitivity, and specificity
based on the four possible outcomes, which are calculated as follows:
• TP—Mango images are classified as defective.
• FP—Mango images are misclassified as good.
• TN—Mango images are classified as good.
• FN—Mango images are misclassified as defective.
Accuracy = (TP + TN)/(TP + FP + TN + FN)
Sensitivity = TP/(TP + FN)
Specificity = TN/(TN + FP)
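As a worked example of these formulas, the short sketch below computes the three measures directly from confusion-matrix counts; the counts passed in the example call are illustrative, not the paper's actual fold-level results.

```python
# Direct implementation of the three formulas above; the example counts are
# illustrative (one hypothetical fold of 80 images), not the paper's results.
def classification_measures(tp: int, tn: int, fp: int, fn: int) -> dict:
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
    }

print(classification_measures(tp=39, tn=40, fp=0, fn=1))
# {'accuracy': 0.9875, 'sensitivity': 0.975, 'specificity': 1.0}
```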
Table 2. Performance measures of the proposed method.
5. Experimental Results
In this work, we employed a deep learning method and binary classification to identify defective mangoes using the proposed computer vision system. Images of Kent mangoes were obtained from a publicly available web database; the images are 1024 × 1024 pixels in size. Data augmentation was applied to the database to artificially increase the number of mango images from 100 to 800, which were used for the performance evaluation of the proposed system. In this system, human intervention is not required to obtain features from the input images. The proposed CNN model was implemented in MATLAB on a computer with a 2.83 GHz processor and 8 GB of RAM. The model was trained with the Adam optimizer for 10 epochs at learning rates of 0.1, 0.01, 0.001, and 0.0001; the highest classification accuracy was obtained with a learning rate of 0.001 and a batch size of 32. The preprocessing technique, namely histogram equalization, was used to enhance the input images. Standard performance measures such as accuracy, recall, and precision were determined and used to assess the proposed system, and 10-fold cross-validation was employed for evaluation. The objective of the proposed deep learning model is an optimal fit, which lies between an underfit and an overfit model and is identified by comparing the training and validation losses, as depicted in Figure 4.
Figure 4. Training loss vs. validation loss.
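The evaluation protocol described in Sections 4 and 5 (10-fold cross-validation with the Adam optimizer, a learning rate of 0.001, a batch size of 32, and 10 epochs) could be wired together as in the sketch below. The arrays X and y are assumed to hold the 800 preprocessed images and their labels, and build_model is any constructor such as the hypothetical build_mango_cnn shown earlier; none of these objects are provided by the paper.

```python
# Sketch of the 10-fold cross-validation protocol with the reported training
# settings (Adam, learning rate 0.001, batch size 32, 10 epochs). X, y and the
# build_model constructor are assumptions; they are not supplied by the paper.
import numpy as np
from sklearn.model_selection import StratifiedKFold

def cross_validate(build_model, X: np.ndarray, y: np.ndarray, n_splits: int = 10):
    fold_accuracies = []
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=42)
    for train_idx, test_idx in skf.split(X, y):
        model = build_model()  # fresh, untrained model for every fold
        model.fit(X[train_idx], y[train_idx], epochs=10, batch_size=32, verbose=0)
        _, accuracy = model.evaluate(X[test_idx], y[test_idx], verbose=0)
        fold_accuracies.append(accuracy)
    return fold_accuracies

# Example (assuming build_mango_cnn and a loaded dataset X, y):
# accuracies = cross_validate(build_mango_cnn, X, y)
# print(np.mean(accuracies))
```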
The confusion matrix of 10-fold cross-validation for Kent mango defect detection is shown in Table 2. In this work, a positive value indicates defective mangoes, whereas good quality mangoes are considered negative. The classification performance measures (sensitivity, accuracy and specificity) of the proposed model in Table 2 reveal that less than 3% of the mangoes were misclassified in all folds. The average accuracy of the proposed model was 98.5%, suggesting that it can efficiently identify defective mangoes. Figure 5 presents the classification results of the 10 folds. Figure 6 shows the ROC curve of the proposed method, which describes the classification ability of the binary classifier and the obtained AUC value of 0.98.
Figure 5. Classification accuracy (%) of the proposed model for each of the 10 folds.
Figure 6. ROC curve for mango defect detection using the proposed CNN model.
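An ROC curve like the one in Figure 6 can be reproduced from held-out predictions with scikit-learn, as sketched below; y_true and y_score are assumed to be the ground-truth labels and the model's softmax probability for the defective class, neither of which is published with the paper.

```python
# Computing the ROC curve and AUC from held-out predictions (cf. Figure 6).
# y_true (1 = defective) and y_score (predicted probability of "defective")
# are assumed inputs, e.g. collected over the cross-validation test folds.
from sklearn.metrics import roc_curve, roc_auc_score
import matplotlib.pyplot as plt

def plot_roc(y_true, y_score):
    fpr, tpr, _ = roc_curve(y_true, y_score)
    auc = roc_auc_score(y_true, y_score)
    plt.plot(fpr, tpr, label=f"CNN (AUC = {auc:.2f})")
    plt.plot([0, 1], [0, 1], linestyle="--", label="chance")
    plt.xlabel("False positive rate (1 - specificity)")
    plt.ylabel("True positive rate (sensitivity)")
    plt.legend()
    plt.show()
```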
6. Discussion
The proposed computer vision system is a simple and efficient tool for the automated
detection of defective mangoes using advanced machine learning techniques. Experiments were carried out on a dataset of 800 mango images. Because a CNN requires a large number of images for better classification, data augmentation was performed to increase the number of data samples. In addition, the proposed model produced consistent results for all iterations.
Table 3 summarizes the various approaches proposed by the researchers for automated
defect detection of mango, including image processing techniques. Compared to related
works, the classification result of the proposed deep learning model obtained the highest
classification accuracy. As seen in the confusion matrix, the proposed CNN model correctly
classified the good quality and defective mangoes.
Moreover, classification on the Kent mango database also demonstrates that the pro-
posed model obtained good classification accuracy. One of the most important advantages
of the proposed deep learning method is that, unlike the traditional machine learning
model, it does not need segmentation, feature extraction, or feature selection processes. However, a disadvantage of the proposed deep learning model is that training is computationally expensive and requires a large amount of data. A small dataset is one of the major challenges in training a deep learning model; therefore, we applied data augmentation to
obtain a larger dataset.
Training the deep learning model is essential to increasing its classification perfor-
mance. Herein, the experimental results reveal that normal mangoes were efficiently
distinguished from defective mangoes. Importantly, the proposed computer vision system
can be used in export marketing to improve the objective evaluation of quality mangoes.
It can also be used in retail stores to ensure the quality of the mangoes. To the best of our
knowledge, this is the first study to propose a deep learning model for detecting mango
defects. In future studies, the proposed deep learning methodology can be employed to
develop a generalized computer vision system for defect identification of various fruits
and vegetables.
7. Conclusions
This study aimed to develop a computer vision system for defect identification in
mangoes using advanced machine learning techniques, which greatly benefits countries
that demand enhanced export marketing of this fruit. The proposed system, which employs
the CNN deep learning model, was evaluated on 800 images of mangoes and obtained a
classification accuracy of 98.5%. The experimental results show that the proposed model can
efficiently detect defective mangoes. This computerized system is developed to replace the
manual evaluation of mango fruit, providing automated non-destructive defect detection.
Therefore, the developed computer vision system is useful for evaluators to easily detect defects in mangoes.
Author Contributions: Drafting, R.N.; collecting the dataset, B.S.; experimental analysis, B.S., R.M.
and M.R.; proposing the new method or methodology, R.N.; and proofreading, B.S., R.M., M.R.
and A.H.G.; supervising, M.R. and A.H.G. All authors have read and agreed to the published version
of the manuscript.
Funding: This research was funded by University of Technology Sydney Internal Fund for A.H.G.
Data Availability Statement: Data will be shared for review based on the editorial reviewer’s request.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Zhen, O.P.; Hashim, N.; Maringgal, B. Quality evaluation of mango using non-destructive approaches: A review. J. Agric. Food
Eng. 2020, 1, 0003.
2. Verma, M.K.; Srivastav, M.; Usha, K. Calender of Operations for Mango Cultivation. Division of Fruits and Horticultural Technology;
ICAR-Indian Agricultural Research Institute: New Delhi, India, 2015.
3. Sadegaonkar, V.D.; Wagh, K.H. Quality inspection and grading of mangoes by computer vision & Image Analysis. Int. J. Eng. Res.
Appl. 2013, 3, 1208–1212.
4. Bhargava, A.; Bansal, A. Fruits and vegetables quality evaluation using computer vision: A review. J. King Saud Univ.-Comput. Inf.
Sci. 2021, 33, 243–257. [CrossRef]
5. Nagle, M.; Intani, K.; Romano, G.; Mahayothee, B.; Sardsud, V.; Müller, J. Determination of surface color of ‘all yellow’ mango cultivars using computer vision. Int. J. Agric. Biol. Eng. 2016, 9, 42–50.
6. Albarrak, K.; Gulzar, Y.; Hamid, Y.; Mehmood, A.; Soomro, A.B. A Deep Learning-Based Model for Date Fruit Classification.
Sustainability 2022, 14, 6339. [CrossRef]
7. Vélez-Rivera, N.; Blasco, J.; Chanona-Pérez, J.; Calderón-Domínguez, G.; de Jesús Perea-Flores, M.; Arzate-Vázquez, I.; Farrera-
Rebollo, R. Computer vision system applied to classification of “Manila” mangoes during ripening process. Food Bioprocess
Technol. 2014, 7, 1183–1194. [CrossRef]
8. Patel, K.K.; Kar, A.; Khan, M.A. Development and an application of computer vision system for nondestructive physical
characterization of mangoes. Agric. Res. 2020, 9, 109–124. [CrossRef]
9. Patel, K.K.; Kar, A.; Khan, M.A. Potential of reflected UV imaging technique for detection of defects on the surface area of mango.
J. Food Sci. Technol. 2019, 56, 1295–1301. [CrossRef]
10. Nandi, C.S.; Tudu, B.; Koley, C. A machine vision technique for grading of harvested mangoes based on maturity and quality.
IEEE Sens. J. 2016, 16, 6387–6396. [CrossRef]
11. Nandi, C.S.; Tudu, B.; Koley, C. A machine vision-based maturity prediction system for sorting of harvested mangoes. IEEE Trans.
Instrum. Meas. 2014, 63, 1722–1730. [CrossRef]
12. Huang, X.; Lv, R.; Wang, S.; Aheto, J.H.; Dai, C. Integration of computer vision and colorimetric sensor array for nondestructive
detection of mango quality. J. Food Process Eng. 2018, 41, e12873. [CrossRef]
13. Guojin, L.; Diyong, D.; Shuang, C. Research on Mango Detection and Classification by Computer Vision. J. Agric. Mech. Res. 2015,
10, 4.
14. Sahu, D.; Potdar, R.M. Defect identification and maturity detection of mango fruits using image analysis. Am. J. Artif. Intell. 2017,
1, 5–14.
15. Andrushia, A.D.; Trephena, P.A. Artificial bee colony based feature selection for automatic skin disease identification of mango
fruit. In Nature Inspired Optimization Techniques for Image Processing Applications; Springer: Cham, Switzerland, 2019; pp. 215–233.
16. Momin, M.A.; Rahman, M.T.; Sultana, M.S.; Igathinathane, C.; Ziauddin, A.T.M.; Grift, T.E. Geometry-based mass grading of
mango fruits using image processing. Inf. Process. Agric. 2017, 4, 150–160. [CrossRef]
17. Raghavendra, A.; Guru, D.S.; Rao, M.K. Mango internal defect detection based on optimal wavelength selection method using
NIR spectroscopy. Artif. Intell. Agric. 2021, 5, 43–51. [CrossRef]
18. Kumari, N.; Belwal, R. Hybridized approach of image segmentation in classification of fruit mango using BPNN and discriminant
analyzer. Multimed. Tools Appl. 2021, 80, 4943–4973. [CrossRef]
19. Patel, K.K.; Kar, A.; Khan, M.A. Monochrome computer vision for detecting common external defects of mango. J. Food Sci.
Technol. 2021, 58, 4550–4557. [CrossRef]
20. Xie, Y.; Ning, L.; Wang, M.; Li, C. Image enhancement based on histogram equalization. J. Phys. Conf. Ser. 2019, 1314, 012161.
[CrossRef]
21. Shorten, C.; Khoshgoftaar, T.M. A survey on image data augmentation for deep learning. J. Big Data 2019, 6, 60. [CrossRef]
22. Acharya, U.R.; Oh, S.L.; Hagiwara, Y.; Tan, J.H.; Adam, M.; Gertych, A.; San Tan, R. A deep convolutional neural network model
to classify heartbeats. Comput. Biol. Med. 2017, 89, 389–396. [CrossRef]
23. Acharya, U.R.; Fujita, H.; Oh, S.L.; Hagiwara, Y.; Tan, J.H.; Adam, M.; Tan, R.S. Deep convolutional neural network for the
automated diagnosis of congestive heart failure using ECG signals. Appl. Intell. 2019, 49, 16–27. [CrossRef]
24. Paoletti, M.E.; Haut, J.M.; Plaza, J.; Plaza, A. A new deep convolutional neural network for fast hyperspectral image classification.
ISPRS J. Photogramm. Remote Sens. 2018, 145, 120–147. [CrossRef]
25. Schmidt-Hieber, J. Nonparametric regression using deep neural networks with ReLU activation function. Ann. Stat. 2020, 48,
1875–1897.
26. Gebrehiwot, A.; Hashemi-Beni, L.; Thompson, G.; Kordjamshidi, P.; Langan, T.E. Deep convolutional neural network for flood
extent mapping using unmanned aerial vehicles data. Sensors 2019, 19, 1486. [CrossRef]
27. Ruuska, S.; Hämäläinen, W.; Kajava, S.; Mughal, M.; Matilainen, P.; Mononen, J. Evaluation of the confusion matrix method in the
validation of an automated system for measuring feeding behaviour of cattle. Behav. Process. 2018, 148, 56–62. [CrossRef]