Article
A Modified Xception Deep Learning Model for Automatic
Sorting of Olives Based on Ripening Stages
Seyed Iman Saedi 1 and Mehdi Rezaei 2,*
Abstract: Olive fruits at different ripening stages give rise to various table olive products and oil
qualities. Therefore, developing an efficient method for recognizing and sorting olive fruits based on
their ripening stages can greatly facilitate post-harvest processing. This study introduces an automatic
computer vision system that utilizes deep learning technology to classify the ‘Roghani’ Iranian olive
cultivar into five ripening stages using color images. The developed model employs convolutional
neural networks (CNN) and transfer learning based on the Xception architecture and ImageNet
weights as the base network. The base model was modified by appending several well-known CNN layers after its final layer. To minimize overfitting and enhance model generality, data augmentation techniques
were employed. By considering different optimizers and two image sizes, four final candidate models
were generated. These models were then compared in terms of loss and accuracy on the test dataset,
classification performance (classification report and confusion matrix), and generality. All four
candidates exhibited high accuracies ranging from 86.93% to 93.46% and comparable classification
performance. In all models, at least one class was recognized with 100% accuracy. However, by taking
into account the risk of overfitting in addition to the network stability, two models were discarded.
Finally, a model with an image size of 224 × 224 and an SGD optimizer, which had a loss of 1.23 and
an accuracy of 86.93%, was selected as the preferred option. The results of this study offer robust
tools for automatic olive sorting systems, simplifying the differentiation of olives at various ripening
levels for different post-harvest products.
Keywords: olive; color image; Xception; sorting; deep learning

1. Introduction
Olive, Olea europaea, is an essential evergreen subtropical fruit. Its fruits are utilized for both table olives and olive oil. Certain varieties are specifically cultivated for oil production, while others, renowned for their larger fruit sizes, are preferred for canning products. Moreover, the production of dual-purpose olive varieties is growing [1]. In Iran, the ‘Roghani’ cultivar stands out as a vital local dual-purpose olive variety, known for its adaptability to diverse environmental conditions and ability to withstand winter cold [2]. The type of canned olives and the quality of olive oil depend on various factors, including the variety, cultivation conditions, and fruit ripening stage [3–5]. Olive fruits can be harvested at different stages, ranging from immature green to fully mature black, and even during over-ripened stages. The ripening stage of the fruit profoundly affects the oil content, chemical composition, sensory characteristics of olive oil, and industrial yield [6,7]. Fruit homogeneity at the same ripening stage is crucial for canned olives, and the quality of olive oil directly depends on the fruit’s ripening stage.
The timing of olive harvesting is typically determined by evaluating the maturity index (MI) of each olive cultivar [5,8,9]. This evaluation of MI is based on changes in both
the skin and flesh color of mature fruit [10]. Decisions about when to harvest fruit from an
orchard are made by conducting MI assessments on fruit samples collected from different
trees. However, it is common to come across olives with varying degrees of ripeness during
processing, as mechanical harvesters that use trunk shakers can harvest one hectare of an
intensive olive grove (consisting of 300–400 trees) within a timeframe of 2 to 5 days [10].
Due to factors such as the location of the fruit on outer or inner branches and exposure to
sunlight, even a single tree may have olives in different stages of maturity, and there may be
variations between each tree due to differences in horticultural practices and management.
Moreover, some orchards may cultivate multiple olive varieties, each with distinct ripening
stages during harvest, while others with a single cultivar may also have variations in fruit
ripeness. In olive processing facilities, it is possible for different growers to bring olives
with varying degrees of ripeness that must be categorized before processing.
Given the importance of olive ripening in the production of various post-harvest prod-
ucts, such as pickles, oil, and canned olives, it is essential to separate and sort olive fruits
before processing. However, manually sorting olives through human visual inspection
is a challenging and inefficient task. To address this challenge, integrating a computer
vision system into olive processing units as part of the automatic separation machinery
offers a potential solution. The system consists of an image-capturing unit, which relies
on a robust image processing model to ensure rapid and accurate results for mechanical
separation [11].
Numerous researchers have investigated various methods for assessing olive fruit
maturity, with a focus on Near Infrared Spectroscopy (NIRS) [10,12,13]. These studies
aimed to predict diverse quality parameters and characterize table olive traits utilizing
NIRS technology. In addition to NIRS, Convolutional Neural Networks (CNNs), a subset
of deep learning, have emerged as a powerful tool for image processing tasks, allowing for
the extraction of high-level features independent of imaging condition and structure [14],
making them a valuable tool for agricultural applications.
The use of cutting-edge technologies, such as deep learning, offers a more promising
solution to address this challenge, garnering the attention of scientists across multiple
agricultural domains [15–17]. Noteworthy applications of CNNs include olive classification,
as demonstrated by Riquelme et al. [18], who employed discriminant analysis to classify
olives based on external damage in images, achieving validation accuracies ranging from
38% to 100%. Guzmán et al. [9] leveraged algorithms based on color and edge detection for
image segmentation, resulting in an impressive 95% accuracy in predicting olive maturity.
Ponce et al. [19] utilized the Inception-ResNetV2 model to classify seven olive fruit varieties,
achieving a remarkable maximum accuracy of 95.91%. Aguilera Puerto et al. [20] developed
an online system for olive fruit classification in the olive oil production process, employing
Artificial Neural Networks (ANN) and Support Vector Machines (SVM) to attain high
accuracies of 98.4% and 98.8%, respectively. Aquino et al. [21] created an artificial vision
algorithm capable of classifying images taken in the field to identify olives directly from
trees, enabling accurate yield predictions. Studies such as Khosravi et al. [17] have also
utilized RGB image acquisition and CNNs for the early estimation of olive fruit ripening
stages on-branch, which has direct implications for orchard production quality and quantity.
Furferi et al. [22] proposed an ANN-based method for automatic maturity index evaluation,
considering four classes based on olive skin and pulp color, while ignoring the presence of
defects. In contrast, Puerto et al. [23] implemented a static computer vision system for olive
classification, employing a shallow learning approach using an ANN with a single hidden
layer. In a recent study by Figorilli et al. [24], olive fruits were classified based on the state
of external color, “Veraison”, and the presence of visible defects using AI algorithms with
RGB imaging. Despite these commendable efforts in olive fruit detection modeling,
previous studies have primarily focused on identifying defective olives, inadvertently
overlooking the comprehensive assessment of distinct stages crucial for olive fruit ripening,
impacting both oil quality and canned olive production [18,20]. This trend, compounded
by the reliance on limited datasets, has significantly hindered the models’ capacity for
discriminating between ripening stages. Specifically, Stage 1 refers to samples with green colors, Stage 2 is characterized by olives with 10–30% browning, while Stages 3, 4, and 5 represent approximately 50%, 90%, and 100% browning (fully black), respectively. The number of images taken at each ripening stage, and the average mass of samples at each class, are presented in Table 1.
Figure 1. Adjusted images of the five ripening stages of Roghani olives under our study. Each class has been denoted by an assigned code (O1–O5) (original pictures).
Table 1. The number of images taken at each stage of olive fruit maturity and the average mass of samples at each class.

Classes             O1            O2            O3            O4            O5
Number of Samples   195           161           183           93            129
Average Mass (g)    4.05 ± 1.30   2.93 ± 0.90   3.00 ± 0.78   3.22 ± 0.67   3.74 ± 3.29
The image data was divided into three distinct parts: the training set, the validation set, and the testing set. To accomplish this, 20% of the total data (equivalent to 153 images) were assigned to the test dataset. The remaining data consisted of 609 images, with 15% (92 images) allocated to the validation set and the remaining 516 images used for the training set.
The training process involved passing the input data through several layers, obtaining the output, and comparing it with the desired output. The difference between the two, which served as the error, was then calculated. Using this error, the network parameters were adjusted, and the data were fed back into the network to compute new results and errors. This process was repeated multiple times, adjusting the parameters after each iteration to minimize the error. There are various formulas and functions to calculate the network error. Once the error was computed, the parameters were updated to move closer to minimizing it; that is, optimizing the weights to achieve the lowest possible error.
Preprocessing the input images is crucial to enhance the model’s accuracy, prevent overfitting, and boost its generalization capability. First, we resized all images to two different sizes: 224 × 224 and 299 × 299. Next, we normalized the pixel values by dividing them by the maximum pixel value of the captured images. Subsequently, we applied data augmentation techniques, including random translation, random flip, random contrast, and random rotation, to artificially increase the number of images used in model development. The data augmentation parameters are presented in Table 2.
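As an illustration, the preprocessing and augmentation pipeline described above can be assembled in a few lines of TensorFlow/Keras. This is a minimal sketch only: the directory layout, batch size, and augmentation factors below are placeholders, since the actual parameter values are given in Table 2 and are not reproduced here.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

IMG_SIZE = 224  # the study also trains a 299 x 299 variant

# Hypothetical directory layout: one sub-folder per ripening class (O1-O5).
train_ds = keras.utils.image_dataset_from_directory(
    "olives/train", image_size=(IMG_SIZE, IMG_SIZE), batch_size=32)

# Normalization: divide pixel values by the maximum (255 for 8-bit RGB).
normalize = layers.Rescaling(1.0 / 255)

# The four augmentations named in the text; the factors below are
# placeholders, not the values from Table 2.
augment = keras.Sequential([
    layers.RandomTranslation(0.1, 0.1),
    layers.RandomFlip("horizontal_and_vertical"),
    layers.RandomContrast(0.2),
    layers.RandomRotation(0.1),
])

# Apply normalization and augmentation to the training stream.
train_ds = train_ds.map(
    lambda x, y: (augment(normalize(x), training=True), y),
    num_parallel_calls=tf.data.AUTOTUNE)
```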
To develop the deep neural network model, we utilized the transfer learning technique. Initially, we invoked the Xception model and loaded its weights from the ImageNet dataset. Subsequently, we embarked on a fine-tuning process by adding additional layers to the base model. Diverse structures for the fine-tuning layers were experimented with, varying their type, position, and arguments to identify the optimal configuration. We explored several layer types and arrangements, with the most commonly used being 2D convolution, Global Average Pooling, Dropout, Batch Normalization, and others. The comprehensive architecture of the resulting model is illustrated in Figure 2.

Figure 2. The architecture of the modified Xception deep learning model proposed for the study (shapes with similar outlines correspond to layers with similar properties).
Table 3. The detailed properties of the CNN architecture (Modified Xception) for two different input
image sizes.
Layer (Type) Output Shape (Input Size = 224 × 224) Output Shape (Input Size = 299 × 299)
Xception Block (None, 7, 7, 2048) (None, 10, 10, 2048)
Convolution 2D (None, 7, 7, 128) (None, 10, 10, 128)
Batch Normalization (None, 7, 7, 128) (None, 10, 10, 128)
Max Pooling 2D (None, 4, 4, 128) (None, 5, 5, 128)
Dropout (None, 4, 4, 128) (None, 5, 5, 128)
Convolution 2D (None, 4, 4, 64) (None, 5, 5, 64)
Batch Normalization (None, 4, 4, 64) (None, 5, 5, 64)
Max Pooling 2D (None, 2, 2, 64) (None, 3, 3, 64)
Dropout (None, 2, 2, 64) (None, 3, 3, 64)
Convolution 2D (None, 2, 2, 32) (None, 3, 3, 32)
Batch Normalization (None, 2, 2, 32) (None, 3, 3, 32)
Max Pooling 2D (None, 1, 1, 32) (None, 2, 2, 32)
Dropout (None, 1, 1, 32) (None, 2, 2, 32)
Dense (None, 1, 1, 254) (None, 2, 2, 254)
Dense (None, 1, 1, 128) (None, 2, 2, 128)
Dense (None, 1, 1, 64) (None, 2, 2, 64)
Global Average Pooling 2D (None, 64) (None, 64)
Dense (None, 5) (None, 5)
Total Parameters: 27,721,803
Trainable Parameters: 27,577,643
Non-trainable Parameters: 144,160
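To make the layer stack of Table 3 concrete, the following Keras sketch rebuilds the modified Xception head. The filter counts, layer order (including the Dense layers applied before Global Average Pooling), and output shapes follow Table 3; kernel sizes, dropout rates, activations, and the SGD learning rate are not specified in the table and are assumptions here.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_model(img_size=224, n_classes=5):
    # Pre-trained Xception backbone with ImageNet weights, classifier removed.
    base = keras.applications.Xception(
        include_top=False, weights="imagenet",
        input_shape=(img_size, img_size, 3))

    x = base.output
    # Three Conv-BN-Pool-Dropout stages, mirroring Table 3. "same" pooling
    # reproduces the 7->4->2->1 (or 10->5->3->2) spatial sizes in the table.
    for filters in (128, 64, 32):
        x = layers.Conv2D(filters, 5, padding="same", activation="relu")(x)
        x = layers.BatchNormalization()(x)
        x = layers.MaxPooling2D(2, padding="same")(x)
        x = layers.Dropout(0.3)(x)  # dropout rate is a placeholder
    # Dense layers act on the channel axis before pooling, as in Table 3.
    for units in (254, 128, 64):
        x = layers.Dense(units, activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    return keras.Model(base.input, outputs)

model = build_model(224)
model.compile(optimizer=keras.optimizers.SGD(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```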
Figure 3. Loss trends for the four final candidate models (panels: Model 1, Model 2, Model 3, Model 4).

Figure 4. Accuracy trends for the four final candidate models (panels: Model 1, Model 2, Model 3, Model 4).
Table 4. The values and epochs of minimum training and validation losses, as well as maximum
training and validation accuracies for each candidate model.
According to Table 5, both image sizes can provide low losses and high accuracies.
Model 1 has the lowest test loss (0.3938) and highest test accuracy (0.9346), indicating
good performance on the test set. However, test loss and test accuracy should not be the
only factors considered when evaluating a CNN model, as another important factor is the
risk of overfitting, which affects the model’s generalization ability to new, unseen data.
Therefore, the best model should be chosen based on a trade-off between test loss, test
accuracy, and the possibility of overfitting. Model 1 cannot be the final choice since it
possesses the possibility of overfitting during the training process, which may result in
poor generalization to new data (Figures 4 and 5). Model 2 has a higher test loss (1.2338)
and lower test accuracy (0.8693) compared to Model 1, but it does not show any signs of
overfitting (Figures 4 and 5), indicating better generalization to new data. Model 3 has a
slightly higher test loss (0.5502) and lower test accuracy (0.9085) than Model 1, but like
Model 1, it displays the possibility of overfitting during training. Model 4 and Model 2
do not have an overfitting issue. However, Model 4 has a significantly higher test loss (3.8232) despite a comparable test accuracy, indicating poor performance on the test set.
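The overfitting screening described here amounts to inspecting the training histories behind Figures 3 and 4. A minimal sketch, assuming `history` is the object returned by Keras's `model.fit` (a hypothetical variable here):

```python
import matplotlib.pyplot as plt

# history = model.fit(train_ds, validation_data=val_ds, epochs=50)
# Strong fluctuations or divergence of the validation curves relative to
# the training curves are the overfitting signals used to discard models.
for key in ("loss", "val_loss", "accuracy", "val_accuracy"):
    plt.plot(history.history[key], label=key)
plt.xlabel("Epoch")
plt.legend()
plt.show()
```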
To evaluate the performance of the candidate models in discriminating between the
different classes (O1 to O5), we utilized four parameters: true positives (TP), true negatives
(TN), false positives (FP), and false negatives (FN). These parameters underpin two evaluation tools: the classification report and the confusion matrix. The classification
report provides information about the performance of a model through precision, recall,
and F1-score, as described by Equations (1)–(3). Precision measures how well the model
predicts positive cases, while recall measures the proportion of correctly predicted positive
instances out of the total actual positive instances. The F1-score is the harmonic mean of
precision and recall, providing a balanced measure that combines both metrics.
Precision = TP / (TP + FP). (1)
Recall = TP / (TP + FN). (2)

F1-score = 2 × (Precision × Recall) / (Precision + Recall). (3)
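In practice, these per-class metrics and the confusion matrices of Figure 5 can be computed with scikit-learn; a minimal sketch with hypothetical label vectors:

```python
from sklearn.metrics import classification_report, confusion_matrix

# Hypothetical labels: y_true are the actual classes (O1-O5 encoded 0-4),
# y_pred are the model's predicted classes for the test images.
y_true = [0, 1, 2, 3, 4, 2, 1]
y_pred = [0, 1, 2, 3, 4, 1, 1]

# Per-class precision, recall, and F1-score (Equations (1)-(3)).
print(classification_report(y_true, y_pred,
                            target_names=["O1", "O2", "O3", "O4", "O5"]))
# Rows are true classes, columns are predicted classes, as in Figure 5.
print(confusion_matrix(y_true, y_pred))
```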
Figure 5. Confusion matrices for the four final candidate models (panels: Model 1, Model 2, Model 3, Model 4).

The classification report for recognizing the five olive ripening classes under study is presented in Table 6. According to this table, Models 1, 3, and 4 achieved a precision value of 1.00 for the O1 class, while Model 2 had a slightly lower precision of 0.97. This indicates that all models performed well in predicting the O1 class. In recognizing class O2, Models 2 and 4 achieved a perfect precision of 100%, while Models 1 and 3 had a precision of 91%. Class O3 was better identified by Models 1 and 3, with a precision of 89% for both models, whereas Models 2 and 4 had lower precisions of 73% and 69%, respectively. For class O4, all models performed similarly, with precisions ranging from 87% to 93%. Finally, class O5 was identified perfectly by all models, except for Model 4, which had a precision of 96%. The performance of all models in identifying classes O1 and O5 was nearly perfect, likely due to their distinct visual properties.
To assess the models’ ability to avoid false negatives, we can compare their recall
values. Models 2, 3, and 4 correctly predicted all O1 instances as O1, meaning they had
zero false negatives. Model 1 had a recall of at least 90% for class O1. For class O2, Model 2
had a lower recall of 59%, while Models 1, 3, and 4 achieved recalls of 94%, 91%, and 81%,
respectively. Models 2 and 4 were perfect in predicting class O3, while Models 1 and 3 had
a high accuracy. In case of class O4, Model 4 performed weakly with a recall of 37%, while
the other models had a reasonable performance. Finally, all O5 instances were correctly
predicted as O5 by Model 1, and Models 2 and 4 were very good, while Model 3 had a
relatively lower accuracy of 77%. Overall, the results suggest that all models performed
well in recognizing classes O1 and O5, while there were variations in performance across
the other classes.
Table 6. Classification report for recognizing the classes by the four final candidates.
In summary, Model 2 appears to be the most suitable choice among the four models,
as it demonstrates low test loss, high test accuracy, and no signs of overfitting. Additionally,
it achieves a relatively high F1-score, indicating its ability to accurately classify instances
across all classes.
Figure 5 displays the confusion matrices for the classification of five olive classes
using four candidate models. The columns represent the predicted values, while the rows
show the true values. Upon examining the matrices, we observed that all models achieved
a perfect classification (100%) for O1, with the exception of Model 1, which mislabeled
two instances as O2. Moving on to class O2, all models primarily confused it with O3.
Model 1 had the least confusion, while Model 2 had the most. For O3, Models 2 and 4
successfully recognized it, but Models 1 and 3 mixed up a few instances with O2 and O4.
In relation to class 4, Models 1 and 2 performed similarly, each with only three instances
confused with O3. Model 4 showed the weakest performance in classifying O4, while
Model 3 performed the best, mistakenly classifying just one instance as O3. Lastly, in the
case of O5, Model 1 achieved the highest accuracy with 100%, but Model 2 misidentified
four O5 instances as O4. Model 3 confused one O5 instance with O3 and four with O4,
while Model 4 misclassified one instance with both O3 and O4. To summarize, the models
generally demonstrated good recognition for classes O1 and O5, which can be attributed to
the minimal visual similarities compared to the other classes.
Our proposed network for detecting olive fruit ripening stages demonstrates competi-
tive performance metrics, exhibiting similar or enhanced accuracy or precision compared to
prior studies such as Guzmán et al. [9] in predicting olive maturity using specific algorithms,
and Puerto et al. [20] for classifying Veraison and visible defects of olive fruit under an
online system. The robustness of the Xception deep learning method in image-based classi-
fication tasks has been demonstrated in various applications. For instance, Pujari et al. [29]
achieved 99.01% accuracy in classifying breast histopathological images using this method.
Similarly, Wu et al. [14] applied the algorithm to classify scene images with an accuracy
of 92.32%, and Salim et al. [27] utilized the Xception model to classify fruit species with a
total accuracy of 97.73%. A comparative analysis, including the most relevant works on
the olive, is provided in Table 7, which discusses classification tasks using both traditional
and modern image analysis methods. One significant advantage of deep learning-based
image processing techniques is their independence from environmental conditions such as
lighting, background, distance to object, camera properties, etc., unlike traditional image
processing methods. Additionally, deep learning techniques excel in feature extraction,
allowing them to extract high-level features compared to low-level features like edges and
color extracted by traditional methods. This high-level feature extraction makes deep learn-
ing techniques powerful and robust tools for processing images captured in unstructured
and natural conditions, enabling their application in more challenging scenarios. Guzmán
et al. [9] used traditional image processing techniques to extract maturity indexes of olive
fruit but were unable to extract high-level features, making it unsuitable for maturity-based
classification tasks. Puerto et al. [23] applied traditional image processing techniques for
classifying different batches of olives entering the milling process but could not identify the
individual olive pieces within the same batch. The most relevant work in this context was
carried out by Khosravi et al. [17] who successfully classified on-branch olives based on
maturity stage (91.91%), although the method’s robustness for post-harvest sorting remains
unclear. Our proposed network, a modified version of Xception, demonstrates promising
performance metrics in recognizing olive fruit ripening stages based on color images. This
indicates its reliability as a tool for post-harvest sorting of olive fruits.
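For completeness, a sketch of the inference step such a sorting line would run on each captured image, assuming `model` is the trained network from the earlier sketch and `olive.jpg` is a placeholder path:

```python
import numpy as np
from tensorflow import keras

CLASS_NAMES = ["O1", "O2", "O3", "O4", "O5"]

# Load one olive image, resize to the model's input, normalize as in training.
img = keras.utils.load_img("olive.jpg", target_size=(224, 224))
x = keras.utils.img_to_array(img)[np.newaxis] / 255.0

# Predict ripening-stage probabilities and report the most likely class.
probs = model.predict(x)[0]
print(CLASS_NAMES[int(np.argmax(probs))])
```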
4. Conclusions
Olives are a vital crop with various post-harvest applications, including pickling, can-
ning, and oil production, each requiring a specific ripening stage. To address this challenge,
a reliable classification system is crucial to sort olives according to their maturity levels.
This study aimed to develop an automated deep learning model utilizing color images
to classify ‘Roghani’, an Iranian olive cultivar, into five ripening stages. We employed a
modified and fine-tuned Xception architecture, harnessing cutting-edge image processing
and deep learning techniques to effectively categorize olives. Four Xception-based models
were shortlisted and evaluated based on their performance, using metrics such as loss, ac-
curacy, classification reports, confusion matrices, and overfitting risk. While all four models
showed comparable performance, Model 1 stood out. However, considering model general-
ity and stability, Model 1 raised concerns due to substantial fluctuations in validation losses
and accuracies during training, indicating a high risk of overfitting. Model 3 boasted a
remarkable accuracy, but its reliability was compromised by its susceptibility to overfitting.
Models 2 and 4 demonstrated stable validation losses and accuracies throughout training,
rendering them superior in terms of generality and stability. Although their accuracies
were not the highest among all models, they were still satisfactory. Of the two, Model 2
is preferred owing to its lower loss value. When selecting a model, a trade-off between
classification performance and model generality must be considered. For the present study,
Model 2 emerges as the optimal choice, striking a balance between respectable classification
results and minimal risk of overfitting, suggesting that it may generalize well to unseen
data. The findings of this research constitute a significant breakthrough in olive sorting and
classification, providing a potent tool for enhancing the efficiency and precision of olive
processing and production.
Author Contributions: Conceptualization, S.I.S. and M.R.; data curation, M.R.; formal analysis and
modelling, S.I.S.; writing—original draft preparation, S.I.S. and M.R.; writing—review and editing,
S.I.S. and M.R. All authors have read and agreed to the published version of the manuscript.
Funding: No funding was received for this work.
Institutional Review Board Statement: Not applicable.
Data Availability Statement: The data supporting this study is available upon request.
Conflicts of Interest: There are no conflicts of interest to declare.
References
1. Fabbri, A.; Baldoni, L.; Caruso, T.; Famiani, F. The Olive: Botany and Production; CABI: Wallingford, Oxfordshire, UK, 2023.
2. Rezaei, M.; Rohani, A. Estimating Freezing Injury on Olive Trees: A Comparative Study of Computing Models Based on
Electrolyte Leakage and Tetrazolium Tests. Agriculture 2023, 13, 1137. [CrossRef]
3. Boskou, D. Olive Oil: Chemistry and Technology, 2nd ed.; New York AOCS Publishing: New York, NY, USA, 2006. [CrossRef]
4. Diab, M.; Ibrahim, A.; Hadad, G.B. Review article on chemical constituents and biological activity of Olea europaea. Rec. Pharm.
Biomed. Sci. 2020, 4, 36–45. [CrossRef]
5. Lazzez, A.; Perri, E.; Caravita, M.A.; Khlif, M.; Cossentini, M. Influence of Olive Maturity Stage and Geographical Origin on
Some Minor Components in Virgin Olive Oil of the Chemlali Variety. J. Agric. Food Chem. 2008, 56, 982–988. [CrossRef] [PubMed]
6. Jiménez, B.; Sánchez-Ortiz, A.; Lorenzo, M.L.; Rivas, A. Influence of fruit ripening on agronomic parameters, quality indices,
sensory attributes and phenolic compounds of Picudo olive oils. Food Res. Int. 2013, 54, 1860–1867. [CrossRef]
7. Pereira, J.A. Special issue on “Olive oil: Quality, composition and health benefits”. Food Res. Int. 2013, 54, 1859. [CrossRef]
8. Famiani, F.; Proietti, P.; Farinelli, D.; Tombesi, A. Oil Quality in Relation to Olive Ripening. Acta Hortic. 2002, 586, 671–674.
[CrossRef]
9. Guzmán, E.; Baeten, V.; Pierna, J.A.F.; García-Mesa, J.A. Determination of the olive maturity index of intact fruits using image
analysis. J. Food Sci. Technol. 2015, 52, 1462–1470. [CrossRef]
10. Bellincontro, A.; Taticchi, A.; Servili, M.; Esposto, S.; Farinelli, D.; Mencarelli, F. Feasible Application of a Portable NIR-AOTF Tool
for On-Field Prediction of Phenolic Compounds during the Ripening of Olives for Oil Production. J. Agric. Food Chem. 2012, 60,
2665–2673. [CrossRef]
11. Violino, S.; Moscovini, L.; Costa, C.; Re, P.D.; Giansante, L.; Toscano, P.; Tocci, F.; Vasta, S.; Manganiello, R.; Ortenzi, L.; et al.
Superior EVOO Quality Production: An RGB Sorting Machine for Olive Classification. Foods 2022, 11, 2917. [CrossRef]
12. Gracia, A.; León, L. Non-destructive assessment of olive fruit ripening by portable near infrared spectroscopy. Grasas Aceites 2011,
62, 268–274.
13. Salguero-Chaparro, L.; Baeten, V.; Abbas, O.; Peña-Rodríguez, F. On-line analysis of intact olive fruits by vis–NIR spectroscopy:
Optimisation of the acquisition parameters. J. Food Eng. 2012, 112, 152–157. [CrossRef]
14. Wu, X.; Liu, R.; Yang, H.; Chen, Z. An Xception Based Convolutional Neural Network for Scene Image Classification with Transfer
Learning. In Proceedings of the 2020 2nd International Conference on Information Technology and Computer Application (ITCA),
Guangzhou, China, 17–19 December 2020; pp. 262–267. [CrossRef]
15. Kamilaris, A.; Prenafeta-Boldú, F.X. Deep learning in agriculture: A survey. Comput. Electron. Agric. 2018, 147, 70–90. [CrossRef]
16. Saedi, S.I.; Khosravi, H. A deep neural network approach towards real-time on-branch fruit recognition for precision horticulture.
Expert Syst. Appl. 2020, 159, 113594. [CrossRef]
17. Khosravi, H.; Saedi, S.I.; Rezaei, M. Real-time recognition of on-branch olive ripening stages by a deep convolutional neural
network. Sci. Hortic. 2021, 287, 110252. [CrossRef]
18. Riquelme, M.T.; Barreiro, P.; Ruiz-Altisent, M.; Valero, C. Olive classification according to external damage using image analysis.
J. Food Eng. 2008, 87, 371–379. [CrossRef]
19. Ponce, J.M.; Aquino, A.; Andújar, J.M. Olive-Fruit Variety Classification by Means of Image Processing and Convolutional Neural
Networks. IEEE Access 2019, 7, 147629–147641. [CrossRef]
20. Aguilera Puerto, D.; Cáceres Moreno, Ó.; Martínez Gila, D.M.; Gómez Ortega, J.; Gámez García, J. Online system for the
identification and classification of olive fruits for the olive oil production process. J. Food Meas. Charact. 2019, 13, 716–727.
[CrossRef]
21. Aquino, A.; Ponce, J.M.; Andújar, J.M. Identification of olive fruit, in intensive olive orchards, by means of its morphological
structure using convolutional neural networks. Comput. Electron. Agric. 2020, 176, 105616. [CrossRef]
22. Furferi, R.; Governi, L.; Volpe, Y. ANN-based method for olive Ripening Index automatic prediction. J. Food Eng. 2010, 101,
318–328. [CrossRef]
23. Puerto, D.A.; Gila, D.M.; García, J.G.; Ortega, J.G. Sorting Olive Batches for the Milling Process Using Image Processing. Sensors
2015, 15, 15738–15754. [CrossRef]
24. Figorilli, S.; Violino, S.; Moscovini, L.; Ortenzi, L.; Salvucci, G.; Vasta, S.; Tocci, F.; Costa, C.; Toscano, P.; Pallottino, F. Olive Fruit
Selection through AI Algorithms and RGB Imaging. Foods 2022, 11, 3391. [CrossRef] [PubMed]
25. Benos, L.; Tagarakis, A.C.; Dolias, G.; Berruto, R.; Kateris, D.; Bochtis, D. Machine Learning in Agriculture: A Comprehensive
Updated Review. Sensors 2021, 21, 3758. [CrossRef] [PubMed]
26. Fan, S.; Liang, X.; Huang, W.; Zhang, V.J.; Pang, Q.; He, X.; Li, L.; Zhang, C. Real-time defects detection for apple sorting
using NIR cameras with pruning-based YOLOV4 network. Comput. Electron. Agric. 2022, 193, 106715. [CrossRef]
27. Salim, F.; Saeed, F.; Basurra, S.; Qasem, S.N.; Al-Hadhrami, T. DenseNet-201 and Xception Pre-Trained Deep Learning Models for
Fruit Recognition. Electronics 2023, 12, 3132. [CrossRef]
28. Chollet, F. Xception: Deep Learning with Depthwise Separable Convolutions. In Proceedings of the IEEE Conference on Computer
Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1800–1807. [CrossRef]
29. Pujari, S.D.; Pawar, M.M.; Wadekar, M. Multi-Classification of Breast Histopathological Image Using Xception: Deep Learning
with Depthwise Separable Convolutions Model. In Techno-Societal 2020; Pawar, P.M., Balasubramaniam, R., Ronge, B.P., Salunkhe,
S.B., Vibhute, A.S., Melinamath, B., Eds.; Springer: Cham, Switzerland, 2021. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.