Systematic Review
Recent Applications of Explainable AI (XAI): A Systematic
Literature Review
Mirka Saarela 1, * and Vili Podgorelec 2
1 Faculty of Information Technology, University of Jyväskylä, P.O. Box 35, FI-40014 Jyväskylä, Finland
2 Faculty of Electrical Engineering and Computer Science, University of Maribor, 2000 Maribor, Slovenia;
vili.podgorelec@um.si
* Correspondence: mirka.saarela@jyu.fi
Abstract: This systematic literature review employs the Preferred Reporting Items for Systematic
Reviews and Meta-Analyses (PRISMA) methodology to investigate recent applications of explainable
AI (XAI) over the past three years. From an initial pool of 664 articles identified through the Web
of Science database, 512 peer-reviewed journal articles met the inclusion criteria—namely, being
recent, high-quality XAI application articles published in English—and were analyzed in detail.
Both qualitative and quantitative statistical techniques were used to analyze the identified articles:
qualitatively by summarizing the characteristics of the included studies based on predefined codes,
and quantitatively through statistical analysis of the data. These articles were categorized according
to their application domains, techniques, and evaluation methods. Health-related applications were
particularly prevalent, with a strong focus on cancer diagnosis, COVID-19 management, and medical
imaging. Other significant areas of application included environmental and agricultural management,
industrial optimization, cybersecurity, finance, transportation, and entertainment. Additionally,
emerging applications in law, education, and social care highlight XAI’s expanding impact. The
review reveals a predominant use of local explanation methods, particularly SHAP and LIME, with
SHAP being favored for its stability and mathematical guarantees. However, a critical gap in the
evaluation of XAI results is identified, as most studies rely on anecdotal evidence or expert opinion
rather than robust quantitative metrics. This underscores the urgent need for standardized evaluation
frameworks to ensure the reliability and effectiveness of XAI applications. Future research should
focus on developing comprehensive evaluation standards and improving the interpretability and
stability of explanations. These advancements are essential for addressing the diverse demands of
various application domains while ensuring trust and transparency in AI systems.
Keywords: explainable artificial intelligence; applications; interpretable machine learning; convolutional
neural network; deep learning; post-hoc explanations; model-agnostic explanations
1. Introduction
While numerous reviews exist on XAI in general [1,2,5], there is a noticeable gap when it comes to in-depth analyses
focused specifically on XAI applications. Existing reviews predominantly explore founda-
tional concepts and theoretical advancements, but only a few concentrate on how XAI is
being applied across different domains. Although a few reviews on XAI applications do
exist [6–8], they have limitations in terms of the coverage period and the number of articles
reviewed. For instance, Hu et al. [6] published their review in 2021, thus excluding any
articles published thereafter. Additionally, they do not specify the total number of articles
reviewed, and their reference list includes only 70 articles. Similarly, Islam et al. [7] and
Saranya and Subhashini [8] reviewed 137 and 91 articles, respectively, but also focused on
earlier periods, leaving a gap in the literature regarding the latest XAI applications.
In contrast, our review fills this gap by providing a more comprehensive and up-
to-date synthesis of XAI applications, analyzing a significantly larger set of 512 recent
articles. Each article was thoroughly reviewed and categorized according to predefined
codes, enabling a systematic and detailed examination of current trends and developments
in XAI applications. This broader scope not only captures the latest advancements but also
offers a more thorough and nuanced overview than previous reviews, making it a valuable
resource for understanding the current landscape of XAI applications.
Given the rapid advancements and diverse applications of XAI, our research focuses
on addressing the following key questions:
• Domains: What are the most common domains of recent XAI applications, and what
are emerging XAI domains?
• Techniques: Which XAI techniques are utilized? How do these techniques vary based
on the type of data used, and in what forms are the explanations presented?
• Evaluation: How is explainability measured? Are specific metrics or evaluation
methods employed?
The remainder of this review is structured as follows: In Section 2, we provide a brief
overview of XAI taxonomies. Section 3 details the process used to identify relevant recent
XAI application articles, along with our coding and review procedures. Section 4 presents
the findings, highlighting the most common and emerging XAI application domains,
the techniques employed based on data type, and a summary of how the different XAI
explanations were evaluated. Finally, in Section 5, we discuss our findings in the context of
our research questions and suggest directions for future research.
2. XAI Taxonomies
XAI methods are commonly distinguished as ante-hoc (intrinsic) versus post-hoc, local versus
global, and model-specific versus model-agnostic (see Figure 1). Ante-hoc/intrinsic XAI methods
encompass techniques that are inherently transparent, often due to their simple structures, such
as linear regression models. Conversely, post-hoc methods elucidate a model's reasoning
retrospectively, following its training phase [5,26,30]. Moreover, distinctions are made between
local and global explanations: while global explanations provide an overarching interpretation
of the entire model, addressing it comprehensively, local explanations elucidate specific
observations, such as individual images [31,32]. Furthermore, explanation techniques may be
categorized as model-specific, relying on aspects of the particular model, or model-agnostic,
applicable across diverse models [5,33]. Model-agnostic techniques can be further categorized
into perturbation- or occlusion-based versus gradient-based: occlusion- or perturbation-based
methods manipulate sections of the input features or images to generate explanations, while
gradient-based methods compute the gradient of the prediction (or classification score) with
respect to the input features [34].
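To make this distinction concrete, the following minimal sketch (an illustration added for the reader, not code from any of the surveyed studies) contrasts a gradient-based saliency map with a simple occlusion-based attribution for a differentiable image classifier in PyTorch; the model interface and the [1, C, H, W] input shape are assumptions.

import torch

def gradient_saliency(model, x, target_class):
    # Gradient-based: d(class score)/d(input), one value per input feature.
    model.eval()
    x = x.clone().requires_grad_(True)
    score = model(x)[0, target_class]   # scalar logit for the target class
    score.backward()                    # backpropagate down to the input
    return x.grad.abs().squeeze(0)      # |gradient| serves as the saliency map

def occlusion_attribution(model, x, target_class, patch=8):
    # Perturbation-based: importance of a region = score drop when it is masked.
    model.eval()
    with torch.no_grad():
        base = model(x)[0, target_class].item()
        _, _, h, w = x.shape            # assumes a [1, C, H, W] image tensor
        heat = torch.zeros((h + patch - 1) // patch, (w + patch - 1) // patch)
        for i in range(0, h, patch):
            for j in range(0, w, patch):
                x_occ = x.clone()
                x_occ[:, :, i:i + patch, j:j + patch] = 0.0   # occlude one patch
                heat[i // patch, j // patch] = base - model(x_occ)[0, target_class].item()
    return heat                         # a large drop marks an important region

Note that the gradient-based variant requires a model with differentiable layers, whereas the occlusion-based variant treats the model as a black box and is thus model-agnostic.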
Figure 1. Overview of different XAI approaches and evaluation methods. These categories were used
to classify the XAI application papers reviewed in this study.
As with machine learning models themselves, there is no universally best XAI ap-
proach; the optimal technique depends on factors such as the nature of the data, the specific
application, and the characteristics of the underlying AI model. For instance, local ex-
planations are particularly useful when seeking insights into specific instances, such as
identifying the reasons behind false positives in a model’s predictions [35]. In cases where
the AI model is inherently complex, post-hoc techniques may be necessary to provide
explanations, with some methods, like those relying on gradients, being applicable only
to specific models, such as neural networks with differentiable layers [34,36]. While a
variety of XAI methods are available, evaluating their effectiveness remains a less-explored
area [4,11]. As illustrated in Figure 1, XAI evaluation approaches can be categorized into
consultations with human experts, anecdotal evidence, and quantitative metrics.
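As a sketch of what such a quantitative metric can look like, the following illustrative example (our own, under assumed interfaces, not a metric prescribed by the cited works) estimates the stability of an explanation by comparing attributions for slightly perturbed inputs:

import numpy as np
from scipy.stats import spearmanr

def explanation_stability(explain_fn, x, n_trials=20, noise=0.01, seed=0):
    # explain_fn maps an input vector to one attribution score per feature.
    rng = np.random.default_rng(seed)
    base = explain_fn(x)
    correlations = []
    for _ in range(n_trials):
        x_noisy = x + rng.normal(0.0, noise, size=x.shape)  # small perturbation
        rho, _ = spearmanr(base, explain_fn(x_noisy))       # rank agreement
        correlations.append(rho)
    # Values close to 1.0 indicate stable explanations; low values flag instability.
    return float(np.mean(correlations))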
As explained above, our review extends existing work on XAI methods and tax-
onomies [5,9–11] by shifting the focus towards the practical applications of XAI across
various domains. In the next section, we will describe how we used the categorizations in
Figure 1 to classify the recent XAI application papers in our review.
3. Research Methodology
Based on the research questions posed in Section 1 and the different taxonomies of
XAI described in Section 2, we initiated our systematic review on recent applications of
XAI. To collect the relevant publications for this review, we followed the analytical protocol
of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)
guidelines [37]. A systematic review “is a review of a clearly formulated question that uses
systematic and explicit methods to identify, select, and critically appraise relevant research,
and to collect and analyze data from the studies that are included in the review” [12].
According to the PRISMA guidelines, our evaluation consisted of several stages: defining the
eligibility criteria and information sources, presenting the search strategy, specifying the
selection and data collection processes, selecting the data items, assessing the risk of bias,
specifying effect measures, describing the synthesis methods, and assessing reporting bias
and certainty [37].
Information sources and search strategy: The search was conducted in February 2024
on Web of Science (WoS) by using the following Boolean search string on the paper topic
(note that searches for topic terms in WoS search the following fields within a record: Title,
Abstract, Author Keywords, Keywords Plus): TS = ((“explainable artificial intelligence”
OR XAI) AND (application* OR process*)). The asterisk (*) at the end of a keyword ensures
the inclusion of the term in both singular and plural forms and its derivatives. The search
was limited to English-language non-review articles published between 1 January 2021 and
20 February 2024 (the search results can be found here: https://www.webofscience.com/
wos/woscc/summary/495b659d-8f9e-4b77-8671-2fac26682231-cda1ce8b/relevance/1, ac-
cessed on 24 September 2024). We exclusively used WoS due to its authoritative status and
comprehensive coverage. Birkle et al. [38] highlight WoS as the world’s oldest and
most widely used research database, ensuring reliable and high-quality data. Its extensive
discipline coverage and advanced citation indexing make it ideal for identifying influential
works and mapping research trends [38].
Eligibility criteria and selection process: The literature selection process flow chart is
summarized in Figure 2. The database search produced 664 papers. After removing non-
English articles (n = 4), 660 were eligible for the full-text review and screening. During
the full-text screening, we applied the inclusion and exclusion criteria (Table 1),
established through iterative discussions between the two authors. The reviewers assessed
each article against these criteria, with 512 research articles meeting
the inclusion criteria and being incorporated into the evaluation procedure.
Table 1. Inclusion and exclusion criteria for the review of recent applications of XAI.
As reported in Figure 2, five articles were not retrievable from our universities’ net-
works, and 143 were excluded because they did not meet our inclusion criteria (primarily
because they introduced general XAI taxonomies or new methods without describing
specific XAI applications). Consequently, 512 articles remained for data extraction and
synthesis. For reasons of reproducibility, the entire list of included articles is attached
in Table A1, along with the XAI application and the reason(s) why the authors say that
explainability is essential in their domain.
Data collection process, data items, study risk of bias assessment, effect measures, synthesis
methods, and reporting bias assessment: To categorize and summarize the included articles in
this review, the first author developed a Google Survey that was filled out for each selected
article. The survey included both categorical (multiple-choice) and open-ended questions
designed to systematically categorize the key aspects of the research. This approach ensured
a consistent and comprehensive analysis across all articles. The survey provided an Excel
file with all responses, simplifying the analysis process.
Each reviewer assessed their allocated articles using the predefined codes and survey
questions created by the first author. In cases of uncertainty regarding the classification
of an article, reviewers noted the ambiguity, and these articles, along with their tentative
classifications, were discussed collectively among both authors to reach a consensus. This
discussion was conducted in an unbiased manner to ensure accurate classifications. While
no automated tools were used for the review process, Python libraries were employed for
quantitative assessment.
Some of the developed codes (survey questions) were as follows:
• What was the main application domain, and what was the specific application?
• In what form (such as rules, feature importance, or counterfactuals) was the explanation created?
• Did the authors use intrinsically explainable models or post-hoc explainability, and did
they focus on global or local explanations?
• How was the quality of the explanation(s) evaluated?
• What did the authors say about why the explainability of their specific application is
important? (Open-ended question.)
After completing the coding process and filling out the survey for each included article,
we synthesized the data using both qualitative and quantitative techniques to address our
research questions [39]. Qualitatively, we summarized the characteristics of the included
studies based on the predefined codes. Quantitatively, we performed statistical analysis of
the data, utilizing Python 3.11.5 to extract statistics from the annotated Excel table. This
combination of qualitative and quantitative approaches, along with collaborative efforts,
ensured the reliability and accuracy of our review process.
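As a sketch of this quantitative step (the file and column names below are hypothetical placeholders, not the actual fields of our survey export):

import pandas as pd

# Load the annotated survey export (file and column names are hypothetical).
df = pd.read_excel("coded_articles.xlsx")

# Frequency of the main application domains across the included articles.
print(df["main_domain"].value_counts())

# Share of papers per explanation scope (local vs. global vs. both).
print((df["explanation_scope"].value_counts(normalize=True) * 100).round(1))

# Cross-tabulation of ML model family against explanation technique.
print(pd.crosstab(df["ml_model"], df["xai_technique"]))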
To assess the risk of reporting bias, we examined the completeness and transparency of
the data reported in each article, focusing on the availability of results related to our prede-
fined research questions. Articles that lacked essential data or failed to report key outcomes
were flagged for potential bias, and this was considered during the certainty assessment.
Certainty assessment: Regarding the quality of the articles, potential bias, and the cer-
tainty of their evidence, we followed the general recommendations [40] and included only
articles for which at least seven out of the ten quality questions proposed by Kitchenham
and Charters [39] could be answered affirmatively. Additionally, we ensured qual-
ity by selecting only articles published in prestigious journals that adhere to established
academic standards, such as being peer-reviewed and having an international editorial
board [35].
Table 2 reports the number of publications per journal for the ten journals with the
highest publication counts in our sample. As shown in the table, IEEE Access has the
highest number of publications, totaling 45, which represents 8.79% of our sample of
articles on recent XAI applications. It is followed by this journal (Applied Sciences-Basel)
with 37 publications (7.23%) and Sensors with 28 publications (5.47%).
Table 2. Number of publications for the ten journals with the highest publication counts in our
sample of articles on recent XAI applications.
Journal # of Publications
IEEE Access 45
Applied Sciences-Basel 37
Sensors 28
Scientific Reports 15
Electronics 14
Remote Sensing 8
Diagnostics 7
Information 7
Machine Learning and Knowledge Extraction 7
Sustainability 7
4. Results
In this section, we present the results of the 512 recent XAI application articles that
met our inclusion and quality criteria. As detailed in Section 3, we included only those
articles that satisfied our rigorous standards and were not flagged for bias. Once the articles
passed our inclusion criteria and were coded and analyzed, we did not conduct further
assessments of potential bias within the study results themselves. Our analysis relied on
quantitative summary statistics and qualitative summaries derived from these high-quality
articles. The complete list of these articles is provided in Table A1, along with their specific
XAI applications and the authors’ justifications for the importance of explainability in
their respective domains. Next, we provide an overview of recent XAI applications by
summarizing the findings from these 512 included articles.
Figure 3. Main XAI application domain of the studies in our corpus (including all the main domains
mentioned in at least three papers).
Medical imaging and diagnostic applications are also prominent, including detecting
paratuberculosis from histopathological images [63], predicting coronary artery disease
from myocardial perfusion images [64], diagnosis and surgery [65], identifying reasons for
MRI scans in multiple sclerosis patients [66], detecting the health status of neonates [67],
spinal postures [68], and chronic wound classification [69]. Additionally, studies have
focused on age-related macular degeneration detection [70], predicting immunological
age [71], cognitive health assessment [72,73], cardiovascular medicine [74,75], glaucoma
prediction and diagnosis [76–78], as well as predicting diabetes [79–82] and classifying
arrhythmia [83,84].
General management applications in healthcare include predicting patient outcomes
in ICU [60], functional work ability prediction [85], a decision support system for nutrition-
related geriatric syndromes [86], predicting hospital admissions for cancer patients [87],
medical data management [88], medical text processing [89], ML model development in
medicine [90], pain recognition [91], drug response prediction [92,93], face mask detec-
tion [94], and studying the sustainability of smart technology applications in healthcare [95].
Lastly, studies about tracing food behaviors [96], aspiration detection in flexible endoscopic
evaluation of swallowing [97], human activity recognition [98], human lower limb activity
recognition [99], factors influencing hearing aid use [100], predicting chronic obstructive
pulmonary disease [101], and assessing developmental status in children [102] underline
the diverse use of XAI in the health domain.
It is also noteworthy that brain and neuroscience studies have frequently been the
main application (Figure 3), often related to health. For example, Alzheimer’s disease clas-
sification and prediction have been major areas of focus [103–109], and Parkinson’s disease
diagnosis has been extensively studied [110–113]. There is also significant research on brain
tumor diagnosis and localization [114–118], predicting brain hemorrhage [119], cognitive
neuroscience development [120], and detecting and explaining autism spectrum disor-
der [121]. Other notable brain studies include the detection of epileptic seizures [122,123],
predicting the risk of brain metastases in patients with lung cancer [124], and automating
skull stripping from brain magnetic resonance images [125]. Similarly, three pharmacy stud-
ies are related to health, including metabolic stability and CYP inhibition prediction [126]
and drug repurposing [127,128].
In the field of environmental and agricultural applications, various studies have uti-
lized XAI techniques for a wide range of purposes. For instance, earthquake-related studies
have focused on predicting an earthquake [129] and assessing the spatial probability of
earthquake impacts [130]. In the area of water resources and climate analysis, research has
been conducted on groundwater quality monitoring [131], predicting ocean circulation
regimes [132], water resources management through snowmelt-driven streamflow predic-
tion [133], and analyzing the impact of land cover changes on climate [134]. Additionally,
studies have addressed predicting spatiotemporal distributions of lake surface temperature
in the Great Lakes [135] and soil moisture prediction [136]. Environmental monitoring
and resource management applications also include predicting heavy metals in ground-
water [137], detection and quantification of isotopes using gamma-ray spectroscopy [138],
and recognizing bark beetle-infested forest areas [139]. Agricultural applications have
similarly leveraged XAI techniques for plant breeding [140], disease detection in agricul-
ture [141], diagnosis of plant stress [142], prediction of nitrogen requirements in rice [143],
grape leaf disease identification [144], and plant genomics [145].
Urban and industrial applications are also prominent, with studies on urban growth
modeling and prediction [146], building energy performance benchmarking [147], and opti-
mization of membraneless microfluidic fuel cells for energy production [148]. Furthermore,
predicting product gas composition and total gas yield [149], wastewater treatment [150],
and the prediction of undesirable events in oil wells [151] have been significant areas of
research. Lastly, environmental studies have also focused on predicting drought conditions
in the Canadian prairies [152].
In the manufacturing sector, XAI techniques have been employed for a variety of
predictive and diagnostic tasks. For instance, research has focused on prognostic lifetime
estimation of turbofan engines [153], fault prediction in 3D printers [154], and modeling
hydrocyclone performance [155]. Moreover, the prediction and monitoring of various
manufacturing processes have seen substantial research efforts. These include predictive
process monitoring [156,157], average surface roughness prediction in smart grinding pro-
cesses [158], and predictive maintenance in manufacturing systems [159]. Additionally,
modeling refrigeration system performance [160] and thermal management in manufac-
turing processes [161] have been explored. Concrete-related studies include predicting
the strength characteristics of concrete [162] and the identification of concrete cracks [163].
In the realm of industrial optimization and fault diagnosis, research has addressed the
intelligent system fault diagnosis of the robotic strain wave gear reducer [164] and the
optimization of injection molding processes [165]. The prediction of pentane content [166]
and the hot rolling process in the steel industry [167] have also been areas of focus. Studies
have further examined job cycle time [168] and yield prediction [169].
In the realm of security and defense, XAI techniques have been widely applied to
enhance cybersecurity measures. Several studies have focused on intrusion detection
systems [170–172], as well as trust management within these systems [173]. Research has
also explored detecting vulnerabilities in source code [174]. Cybersecurity applications
include general cybersecurity measures [175], the use of XAI methods in cybersecurity [176],
and specific studies on malware detection [177]. In the context of facial and voice recog-
nition and verification, XAI techniques have been employed for face verification [178]
and deepfake voice detection [179]. Additionally, research has addressed attacking ML
classifiers in EEG signal-based human emotion assessment systems using data poisoning
attacks [180]. Emerging security concerns in smart cities have led to studies on attack detec-
tion in IoT infrastructures [181]. Furthermore, aircraft detection from synthetic aperture
radar (SAR) imagery has been a significant area of research [182]. Social media monitoring
for xenophobic content detection [183] and the broader applications of intrusion detection
and cybersecurity [184] highlight the diverse use of XAI in this domain.
In the finance sector, XAI techniques have been employed to enhance various decision-
making processes. Research has focused on decision-making in banking and finance sector
applications [185], asset pricing [186], and predicting credit card fraud [187]. Studies have
also aimed at predicting decisions to approve or reject loans [188] and addressing a range of
credit-related problems, including fraud detection, risk assessment, investment decisions,
algorithmic trading, and other financial decision-making processes [189]. Credit risk assess-
ment has been a significant area of research, with studies on credit risk assessment [190],
predicting loan defaults [191], and credit risk estimation [192,193]. The prediction and
recognition of financial crisis roots have been explored [194], alongside risk management
in insurance savings products [195]. Furthermore, time series forecasting and anomaly
detection have been important areas of study [196].
XAI has also been used for transportation and self-driving car applications, such as
the safety of self-driving cars [197], marine autonomous surface vehicle engineering [198],
autonomous vehicles for object detection and networking [199,200], and the development of
advanced driver-assistance systems [201]. Similarly, XAI offered support in retail and sales,
such as inventory management [202], on-shelf availability monitoring [203], predicting
online purchases based on information about online behavior [204], customer journey
mapping automation [205], and churn prediction [206,207].
In the field of education, XAI has been applied to various areas such as the early predic-
tion of student performance [208], predicting dropout rates in engineering faculties [209],
forecasting alumni income [210], and analyzing student agency [211]. In psychology, XAI
was used for classifying psychological traits from digital footprints [212]; in social care,
for child welfare screening [213]; and in law, for detecting reasons behind a judge’s
decision-making process [214], predicting withdrawal from the legal process in cases of vio-
lence towards women in intimate relationships [215], and inter partes institution outcomes
predictions [216]. In natural language processing, XAI was used for explaining sentence
embedding [217], question classification [218], question answering [219], sarcasm detec-
tion in dialogues [220], identifying emotions from speech [221], assessment of familiarity
ratings for domain concepts [222], and detecting AI-generated text [223].
In entertainment, XAI was used, for example, for movie recommendations [224], ex-
plaining art [225], and different gaming applications, including analyzing and optimizing the
performance of agents in a game [226], deep Q-learning experience replay [227], and cheating
detection and player churn prediction [228]. Furthermore, several studies concentrated on
(social) media deceptive online content (such as fake news and deepfake images) detec-
tion [229–234]. In summary, the recent applications of XAI span a diverse array of domains,
reflecting its evolving scope; Figure 4 illustrates eight notable application areas.
Figure 4. Saliency maps of eight diverse recent XAI applications from various domains: brain
tumor classification [116], grape leaf disease identification [144], emotion detection [235], ripe status
recognition [141], volcanic localizations [236], traffic sign classification [237], cell segmentation [238],
and glaucoma diagnosis [77] (from top to bottom and left to right).
Figure 5. Number of papers in our corpus that used global versus local explanations.
Figure 6. Most common explanation techniques used in the papers in our corpus (only XAI techniques
used in at least five papers are shown).
Given the prevalence in finance of tree-based ML models, which, similarly to linear and
logistic regression, can be characterized as the most transparent and inherently interpretable
models, users in the financial domain appear especially keen on gaining insights into and
explanations of how the ML models operate on their data.
Figure 7. Most frequently used ML models in the papers in our corpus (only ML models used at
least five times are shown): neural networks, tree-based models, support vector machines,
linear/logistic regression, K nearest neighbor, Bayesian, Gaussian, fuzzy logic-based,
optimization-based, graph-based, and other models.
Figure 8. ML tasks of the papers in our corpus: classification, regression, clustering,
reinforcement learning, and other.
Figure 9. Number of papers in our corpus that used a post-hoc approach (403; 78.7%) versus an
intrinsically explainable ML model (63; 12.3%), or both (46; 9.0%).
Figure 10. Number of papers that used a specific ML model presented as intrinsically explainable:
tree-based, deep NN, linear/logistic regression, Bayesian, and other/specific models.
Finance accounts for a considerable share of tree-based ML model applications (including risk management in insurance [195], financial crisis prediction [194], investment decisions and algo-
rithmic trading [189], and asset pricing [186]) and even 9% of linear or logistic regression
applications (used primarily for credit risk assessment [190] and prediction [193], as well
as financial decision-making processes [189]). While post-hoc explainability methods, pri-
marily SHAP and LIME, are the most favored in the financial sector [189], intrinsically
explainable models are gaining popularity for revealing insights and are being used
for stock market analysis [278] and forecasting [279], profit optimization, and predicting
loan defaults [191]. Education represents 2% of all applications of tree-based ML models
(including early prediction of student performance [208], predicting student dropout [209],
and advanced learning analytics [211]) and 4% of linear or logistic regression models (such
as pedagogical decision-making [211] and prediction of post-graduate success and alumni
income [210]), while (deep) neural networks are used for comparison with other methods
in only two of all the reviewed XAI papers concerning education [209,211].
Among the papers whose main task was regression and that used some metric to evaluate their
explanations, the findings underscore the critical role of explainability techniques like Shapley
values and Grad-CAM in enhancing model interpretability and accuracy (e.g., [157,291]) across
various domains, from wind turbine anomaly detection [244] to credit card fraud prediction [187].
While global scores aid in feature selection, semi-local analyses offer more meaningful in-
sights [292]. XAI methods revealed system-level insights and emergent properties [293],
though challenges like inconsistency, instability, and complexity persist [157,294]. User studies
and model retraining confirmed the practical benefits of improved explanations [213,295].
However, the authors mentioned that the explainability of their results was limited by the
lack of suitable metrics for evaluating the explainability of algorithms [294].
Finally, for the most frequent ML task of classification, our analysis of the papers
that used metrics to evaluate their explainability results emphasizes the impor-
tance of explainability in enhancing model transparency, robustness, and decision-making
accuracy across various applications, from object detection from SAR images [182] and
hate speech detection [296] to classification of skin cancer [32] and cyber threats [297]. Tech-
niques like SHAP, LIME, and Grad-CAM provided insights into feature importance and
model behavior (e.g., [124,298,299]). In some situations, the adopted XAI methods showed
improved performance and more meaningful explanations, aiding in tasks like malware
detection [177], diabetes prediction [82], extracting concepts [298], and remote sensing [300].
Evaluations confirmed that aligning explanations with human expectations and ensuring
local and global consistency are key to improving the effectiveness and trustworthiness
of AI systems [235]. The authors concluded that while explanation techniques showed
promise, there is still a long way to go before automatic systems can be reliably used in
practice [32], and widely adopted XAI metrics would be of substantial help here.
In summary, the results reveal distinct preferences and practices in using XAI. Tree-
based models, commonly used in health applications, employ various explanation forms
like feature importance, rules, and visualization, while deep neural networks primarily
utilize visualization. Linear and logistic regression models favor feature importance.
In finance and education, tree-based and regression models are more prevalent than deep
neural networks. However, despite the widespread application of XAI methods, evaluation
practices remain underdeveloped. Over half of the studies did not assess the quality of
their explanations, with only a minority using quantitative metrics. There is a need for
standardized evaluation metrics to improve the reliability and effectiveness of XAI systems.
5. Discussion
In this review, we systematically analyzed 512 recent XAI application articles, categorizing them
according to their application domains, utilized techniques, and evaluation methods. The findings indicate a domi-
nant trend in health-related applications, particularly in cancer prediction and diagnosis,
COVID-19 management, and various other medical imaging and diagnostic uses. Other
significant domains include environmental and agricultural applications, urban and indus-
trial optimization, manufacturing, security and defense, finance, transportation, education,
psychology, social care, law, natural language processing, and entertainment.
In health, XAI has been extensively applied to areas such as cancer detection, brain
and neuroscience studies, and general healthcare management. Environmental applications
span earthquake prediction, water resources management, and climate analysis. Urban
and industrial applications focus on energy performance, waste treatment, and manufac-
turing processes. In security, XAI techniques enhance cybersecurity and intrusion detection.
Financial applications improve decision-making processes in banking and asset manage-
ment. Transportation studies leverage XAI for autonomous vehicles and marine navigation.
The review also highlights emerging XAI applications in education for predicting student
performance and in social care for child welfare screening.
In categorizing recent XAI applications, we aimed to identify and highlight the most
significant overarching themes within the literature. While some categories, such as “health”,
are clearly defined and widely recognized within the research community, others, like “in-
dustry” and “technology”, are broader and less distinct. The latter categories encompass a
diverse range of applications, reflecting the varied contexts in which XAI methods are em-
ployed across different sectors. This categorization approach, though occasionally less precise,
captures the most critical global trends in XAI research. It acknowledges the interdisciplinary
nature of the field, where specific categories may overlap or lack the specificity found in
others. Despite these challenges, our goal was to provide a comprehensive overview that
highlights the most prominent domains where XAI is being applied while recognizing that
some categories, by their nature, are more general and encompass a wider array of subfields.
By far the most frequent ML task among the reviewed XAI papers is classification,
followed by regression and clustering. Among the used ML models, deep neural networks
are predominant, especially convolutional neural networks. The second most used group of
ML models are tree-based models (decision and regression trees, random forest, and other
types of tree ensembles). Interestingly, there is no substantial difference between the major
ML models with regard to the ML task of their target application.
Feature importance, referring to techniques that assign a score to input features based on
how useful they are at predicting a target variable [26], is the most common form of explana-
tion among the reviewed XAI papers. Some sort of visualization, trying to visually represent
the (hidden) knowledge of a ML model [301], is used very often as well. Other commonly
used forms of explanation include the use of saliency maps, rules, and counterfactuals.
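As one concrete way of obtaining such feature-importance scores, the brief sketch below computes permutation importance with scikit-learn (an illustrative example; the dataset and model are placeholders rather than a setup taken from any reviewed paper):

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder tabular dataset; any supervised learning task would do.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# A feature's importance is the drop in test score when that feature is
# randomly shuffled, which breaks its relationship with the target.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in np.argsort(result.importances_mean)[::-1][:5]:
    print(f"feature {idx}: {result.importances_mean[idx]:.4f}")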
Regarding methods, local explanations are predominant, with SHAP and LIME being
the most commonly used techniques. SHAP is preferred for its stability and mathematical
guarantees [240], while LIME is noted for its model-agnostic nature but criticized for its
instability [32]. Gradient-based techniques such as Grad-CAM, Grad-CAM++, SmoothGrad,
LRP, and Integrated Gradients are frequently used for image and complex data [179,182].
In general, post-hoc explainability is much more frequent than the use of some intrinsically
explainable ML model. However, only a few studies quantitatively measure the quality of
XAI results, with most relying on anecdotal evidence or expert evaluation [4].
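To illustrate how such local post-hoc explanations are typically produced in practice, the following minimal sketch applies the shap library to a tree ensemble (an illustrative example on placeholder data; it does not reproduce any of the reviewed studies):

import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

# Placeholder data and model; TreeExplainer supports most tree ensembles.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier().fit(X, y)

# Exact Shapley values for tree models: per-instance attributions that
# provably sum to the gap between the prediction and the expected value,
# which is the stability/consistency guarantee noted above.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Local explanation of a single prediction, and a global summary view.
shap.force_plot(explainer.expected_value, shap_values[0], X.iloc[0], matplotlib=True)
shap.summary_plot(shap_values, X)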
In conclusion, the recent surge in XAI applications across diverse domains underscores
its growing importance in providing transparency and interpretability to AI models [4,5].
Health-related applications, particularly in oncology and medical diagnostics, dominate
the landscape, reflecting the critical need for explainable and trustworthy AI in sensitive
and high-stakes areas. The review also reveals significant research efforts in environmen-
tal management, industrial optimization, cybersecurity, and finance, demonstrating the
versatile utility of XAI techniques.
Despite the widespread adoption of XAI, there is a notable gap in the evaluation of
explanation quality. The analysis of how the authors evaluate the quality of their XAI
approaches and results revealed that in the majority of studies, the authors still do not
evaluate the quality of their explanations or simply rely on subjective or anecdotal methods,
with only a few employing rigorous quantitative metrics [284]. Cooperation with domain
experts and including users can greatly contribute to the practical usefulness of the results,
but above all, more attention needs to be paid to the development and use of well-defined
and generally adopted metrics for evaluating the quality of explanations. Only then can
reliable, interpretable, and meaningful explanations be expected with a
significantly higher degree of confidence. There is an urgent need for standardized evaluation
frameworks to ensure the reliability and effectiveness of XAI methods, as well as to improve
the interpretability and stability of explanations. The development of such metrics could
mitigate risks like confirmation bias and enhance the overall robustness of XAI applications.
It should also be noted that the field of XAI is rapidly evolving. During the course of conducting and writing this review,
numerous additional relevant articles emerged that could not be incorporated due to time
constraints. This underscores the dynamic and ongoing nature of research in this area.
Author Contributions: Conceptualization, M.S.; methodology, M.S.; validation, M.S. and V.P.; formal
analysis, M.S. and V.P.; investigation, M.S. and V.P.; resources, M.S.; data curation, M.S. and V.P.;
writing—original draft preparation, M.S. and V.P.; writing—review and editing, M.S. and V.P. All
authors have read and agreed to the published version of the manuscript.
Funding: The work by M.S. was supported by the K.H. Renlund Foundation and the Academy of
Finland (project no. 356314). The work by V.P. was supported by the Slovenian Research Agency
(Research Core Funding No. P2-0057).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The review was not registered; however, the dataset created during the
full-text review, including predefined codes and protocol details, is available from the first author
upon request.
Acknowledgments: The authors (M.S. and V.P.) would like to thank Lilia Georgieva for serving with
them as a guest editor of the special issue on “Recent Application of XAI” that initiated this review.
Conflicts of Interest: The authors declare no conflicts of interest.
Table A1. Articles included in our corpus of recent XAI applications, their application, and the
reasons why the authors argue that explainability is important in their application.
References
1. Adadi, A.; Berrada, M. Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access 2018,
6, 52138–52160. [CrossRef]
2. Minh, D.; Wang, H.X.; Li, Y.F.; Nguyen, T.N. Explainable artificial intelligence: A comprehensive review. Artif. Intell. Rev. 2022,
55, 3503–3568. [CrossRef]
3. Saeed, W.; Omlin, C. Explainable AI (XAI): A systematic meta-survey of current challenges and future opportunities. Knowl.-Based
Syst. 2023, 263, 110273. [CrossRef]
4. Nauta, M.; Trienes, J.; Pathak, S.; Nguyen, E.; Peters, M.; Schmitt, Y.; Schlötterer, J.; van Keulen, M.; Seifert, C. From anecdotal
evidence to quantitative evaluation methods: A systematic review on evaluating explainable AI. ACM Comput. Surv. 2023, 55, 295.
[CrossRef]
5. Arrieta, A.B.; Díaz-Rodríguez, N.; Del Ser, J.; Bennetot, A.; Tabik, S.; Barbado, A.; García, S.; Gil-López, S.; Molina, D.; Benjamins,
R.; et al. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf.
Fusion 2020, 58, 82–115. [CrossRef]
6. Hu, Z.F.; Kuflik, T.; Mocanu, I.G.; Najafian, S.; Shulner Tal, A. Recent studies of XAI—Review. In Adjunct Proceedings of the 29th
ACM Conference on User Modeling, Adaptation and Personalization, Utrecht, The Netherlands, 21–25 June 2021; pp. 421–431.
7. Islam, M.R.; Ahmed, M.U.; Barua, S.; Begum, S. A systematic review of explainable artificial intelligence in terms of different
application domains and tasks. Appl. Sci. 2022, 12, 1353. [CrossRef]
8. Saranya, A.; Subhashini, R. A systematic review of Explainable Artificial Intelligence models and applications: Recent develop-
ments and future trends. Decis. Anal. J. 2023, 7, 100230.
9. Schwalbe, G.; Finzel, B. A comprehensive taxonomy for explainable artificial intelligence: A systematic survey of surveys on
methods and concepts. Data Min. Knowl. Discov. 2024, 38, 3043–3101. [CrossRef]
10. Speith, T. A review of taxonomies of explainable artificial intelligence (XAI) methods. In Proceedings of the 2022 ACM Conference
on Fairness, Accountability, and Transparency, Seoul, Republic of Korea, 21–24 June 2022; pp. 2239–2250.
11. Vilone, G.; Longo, L. Notions of explainability and evaluation approaches for explainable artificial intelligence. Inf. Fusion 2021,
76, 89–106. [CrossRef]
12. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; PRISMA Group. Preferred reporting items for systematic reviews and
meta-analyses: The PRISMA statement. Ann. Intern. Med. 2009, 151, 264–269. [CrossRef]
13. Samek, W.; Montavon, G.; Vedaldi, A.; Hansen, L.K.; Müller, K.R. Explainable AI: Interpreting, Explaining and Visualizing Deep
Learning; Springer Nature: Berlin/Heidelberg, Germany, 2019; Volume 11700.
14. Koh, P.W.; Liang, P. Understanding black-box predictions via influence functions. In Proceedings of the International Conference
on Machine Learning, PMLR, Sydney, Australia, 6–11 August 2017; Volume 70.
15. Yeh, C.K.; Kim, J.; Yen, I.E.H.; Ravikumar, P.K. Representer point selection for explaining deep neural networks. Adv. Neural Inf.
Process. Syst. 2018, 31.
16. Li, O.; Liu, H.; Chen, C.; Rudin, C. Deep Learning for Case-Based Reasoning through Prototypes: A Neural Network that
Explains Its Predictions. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7
February 2018.
17. Wachter, S.; Mittelstadt, B.; Russell, C. Counterfactual Explanations without Opening the Black Box: Automated Decisions and
the GDPR. Harv. J. Law Technol. 2017, 31, 841. [CrossRef]
18. Erhan, D.; Bengio, Y.; Courville, A.; Vincent, P. Visualizing Higher-Layer Features of a Deep Network; Technical Report 1341; University of Montreal: Montreal, QC, Canada, 2009.
19. Towell, G.G.; Shavlik, J.W. Extracting refined rules from knowledge-based neural networks. Mach. Learn. 1993, 13, 71–101.
[CrossRef]
20. Castro, J.L.; Mantas, C.J.; Benitez, J.M. Interpretation of artificial neural networks by means of fuzzy rules. IEEE Trans. Neural
Netw. 2002, 13, 101–116. [CrossRef]
21. Mitra, S.; Hayashi, Y. Neuro-fuzzy rule generation: Survey in soft computing framework. IEEE Trans. Neural Netw. 2000, 11,
748–768. [CrossRef]
22. Fisher, A.; Rudin, C.; Dominici, F. All Models are Wrong, but Many are Useful: Learning a Variable’s Importance by Studying an
Entire Class of Prediction Models Simultaneously. J. Mach. Learn. Res. 2019, 20, 1–81.
23. Fong, R.C.; Vedaldi, A. Interpretable explanations of black boxes by meaningful perturbation. In Proceedings of the IEEE
International Conference on Computer Vision, Venice, Italy, 22–29 October 2017.
24. Zintgraf, L.M.; Cohen, T.S.; Adel, T.; Welling, M. Visualizing deep neural network decisions: Prediction difference analysis. In
Proceedings of the International Conference on Learning Representations, ICLR, Toulon, France, 24–26 April 2017; pp. 1–12.
25. Zeiler, M.D.; Fergus, R. Visualizing and understanding convolutional networks. In Proceedings of the European Conference on
Computer Vision, Zurich, Switzerland, 6–12 September 2014.
26. Saarela, M.; Jauhiainen, S. Comparison of feature importance measures as explanations for classification models. SN Appl. Sci.
2021, 3, 272. [CrossRef]
27. Wojtas, M.; Chen, K. Feature Importance Ranking for Deep Learning. In Proceedings of the Advances in Neural Information
Processing Systems (NIPS 2020), Vancouver, BC, Canada, 6–12 December 2020; Volume 33, pp. 5105–5114.
28. Burkart, N.; Huber, M.F. A Survey on the Explainability of Supervised Machine Learning. J. Artif. Intell. Res. 2021, 70, 245–317.
[CrossRef]
29. Saarela, M. On the relation of causality-versus correlation-based feature selection on model fairness. In Proceedings of the 39th
ACM/SIGAPP Symposium on Applied Computing, Avila, Spain, 8–12 April 2024; pp. 56–64.
30. Guidotti, R.; Monreale, A.; Ruggieri, S.; Turini, F.; Giannotti, F.; Pedreschi, D. A survey of methods for explaining black box
models. ACM Comput. Surv. (CSUR) 2018, 51, 93. [CrossRef]
31. Molnar, C. Interpretable Machine Learning; Lulu. com: Morrisville, NC, USA, 2020.
32. Saarela, M.; Georgieva, L. Robustness, Stability, and Fidelity of Explanations for a Deep Skin Cancer Classification Model. Appl.
Sci. 2022, 12, 9545. [CrossRef]
33. Carvalho, D.V.; Pereira, E.M.; Cardoso, J.S. Machine learning interpretability: A survey on methods and metrics. Electronics 2019,
8, 832. [CrossRef]
34. Wang, Y.; Zhang, T.; Guo, X.; Shen, Z. Gradient based Feature Attribution in Explainable AI: A Technical Review. arXiv 2024,
arXiv:2403.10415.
35. Saarela, M.; Kärkkäinen, T. Can we automate expert-based journal rankings? Analysis of the Finnish publication indicator. J. Inf.
2020, 14, 101008. [CrossRef]
36. Samek, W.; Montavon, G.; Lapuschkin, S.; Anders, C.J.; Müller, K.R. Explaining deep neural networks and beyond: A review of
methods and applications. Proc. IEEE 2021, 109, 247–278. [CrossRef]
37. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.;
Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. Int. J. Surg. 2021,
88, 105906. [CrossRef]
38. Birkle, C.; Pendlebury, D.A.; Schnell, J.; Adams, J. Web of Science as a data source for research on scientific and scholarly activity.
Quant. Sci. Stud. 2020, 1, 363–376. [CrossRef]
39. Kitchenham, B.; Charters, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering; EBSE Technical Report,
EBSE-2007-01; Software Engineering Group, School of Computer Science and Mathematics, Keele University: Keele, UK, 2007.
40. Da’u, A.; Salim, N. Recommendation system based on deep learning methods: A systematic review and new directions. Artif.
Intell. Rev. 2020, 53, 2709–2748. [CrossRef]
41. Mridha, K.; Uddin, M.M.; Shin, J.; Khadka, S.; Mridha, M.F. An Interpretable Skin Cancer Classification Using Optimized
Convolutional Neural Network for a Smart Healthcare System. IEEE Access 2023, 11, 41003–41018. [CrossRef]
42. Carrieri, A.P.; Haiminen, N.; Maudsley-Barton, S.; Gardiner, L.J.; Murphy, B.; Mayes, A.E.; Paterson, S.; Grimshaw, S.; Winn, M.;
Shand, C.; et al. Explainable AI reveals changes in skin microbiome composition linked to phenotypic differences. Sci. Rep. 2021,
11, 4565. [CrossRef]
43. Maouche, I.; Terrissa, L.S.; Benmohammed, K.; Zerhouni, N. An Explainable AI Approach for Breast Cancer Metastasis Prediction
Based on Clinicopathological Data. IEEE Trans. Biomed. Eng. 2023, 70, 3321–3329. [CrossRef] [PubMed]
44. Yagin, B.; Yagin, F.H.; Colak, C.; Inceoglu, F.; Kadry, S.; Kim, J. Cancer Metastasis Prediction and Genomic Biomarker Identification
through Machine Learning and eXplainable Artificial Intelligence in Breast Cancer Research. Diagnostics 2023, 13, 3314. [CrossRef]
45. Kaplun, D.; Krasichkov, A.; Chetyrbok, P.; Oleinikov, N.; Garg, A.; Pannu, H.S. Cancer Cell Profiling Using Image Moments and
Neural Networks with Model Agnostic Explainability: A Case Study of Breast Cancer Histopathological (BreakHis) Database.
Mathematics 2021, 9, 2616. [CrossRef]
46. Kwong, J.C.C.; Khondker, A.; Tran, C.; Evans, E.; Cozma, I.A.; Javidan, A.; Ali, A.; Jamal, M.; Short, T.; Papanikolaou, F.;
et al. Explainable artificial intelligence to predict the risk of side-specific extraprostatic extension in pre-prostatectomy patients.
Cuaj-Can. Urol. Assoc. J. 2022, 16, 213–221. [CrossRef] [PubMed]
47. Ramirez-Mena, A.; Andres-Leon, E.; Alvarez-Cubero, M.J.; Anguita-Ruiz, A.; Martinez-Gonzalez, L.J.; Alcala-Fdez, J. Explainable
artificial intelligence to predict and identify prostate cancer tissue by gene expression. Comput. Methods Programs Biomed. 2023,
240, 107719. [CrossRef]
48. Anjara, S.G.; Janik, A.; Dunford-Stenger, A.; Mc Kenzie, K.; Collazo-Lorduy, A.; Torrente, M.; Costabello, L.; Provencio, M.
Examining explainable clinical decision support systems with think aloud protocols. PLoS ONE 2023, 18, e0291443. [CrossRef]
49. Wani, N.A.; Kumar, R.; Bedi, J. DeepXplainer: An interpretable deep learning based approach for lung cancer detection using
explainable artificial intelligence. Comput. Methods Programs Biomed. 2024, 243, 107879. [CrossRef]
50. Laios, A.; Kalampokis, E.; Mamalis, M.E.; Tarabanis, C.; Nugent, D.; Thangavelu, A.; Theophilou, G.; De Jong, D. RoBERTa-
Assisted Outcome Prediction in Ovarian Cancer Cytoreductive Surgery Using Operative Notes. Cancer Control. 2023, 30,
10732748231209892. [CrossRef]
51. Laios, A.; Kalampokis, E.; Johnson, R.; Munot, S.; Thangavelu, A.; Hutson, R.; Broadhead, T.; Theophilou, G.; Leach, C.; Nugent,
D.; et al. Factors Predicting Surgical Effort Using Explainable Artificial Intelligence in Advanced Stage Epithelial Ovarian Cancer.
Cancers 2022, 14, 3447. [CrossRef]
52. Ghnemat, R.; Alodibat, S.; Abu Al-Haija, Q. Explainable Artificial Intelligence (XAI) for Deep Learning Based Medical Imaging
Classification. J. Imaging 2023, 9, 177. [CrossRef]
53. Lohaj, O.; Paralic, J.; Bednar, P.; Paralicova, Z.; Huba, M. Unraveling COVID-19 Dynamics via Machine Learning and XAI:
Investigating Variant Influence and Prognostic Classification. Mach. Learn. Knowl. Extr. 2023, 5, 1266–1281. [CrossRef]
54. Sarp, S.; Catak, F.O.; Kuzlu, M.; Cali, U.; Kusetogullari, H.; Zhao, Y.; Ates, G.; Guler, O. An XAI approach for COVID-19 detection
using transfer learning with X-ray images. Heliyon 2023, 9, e15137. [CrossRef]
55. Sargiani, V.; De Souza, A.A.; De Almeida, D.C.; Barcelos, T.S.; Munoz, R.; Da Silva, L.A. Supporting Clinical COVID-19 Diagnosis
with Routine Blood Tests Using Tree-Based Entropy Structured Self-Organizing Maps. Appl. Sci. 2022, 12, 5137. [CrossRef]
56. Zhang, X.; Han, L.; Sobeih, T.; Han, L.; Dempsey, N.; Lechareas, S.; Tridente, A.; Chen, H.; White, S.; Zhang, D. CXR-Net: A
Multitask Deep Learning Network for Explainable and Accurate Diagnosis of COVID-19 Pneumonia from Chest X-ray Images.
IEEE J. Biomed. Health Inform. 2023, 27, 980–991. [CrossRef]
57. Palatnik de Sousa, I.; Vellasco, M.M.B.R.; Costa da Silva, E. Explainable Artificial Intelligence for Bias Detection in COVID
CT-Scan Classifiers. Sensors 2021, 21, 5657. [CrossRef] [PubMed]
58. Nguyen, D.Q.; Vo, N.Q.; Nguyen, T.T.; Nguyen-An, K.; Nguyen, Q.H.; Tran, D.N.; Quan, T.T. BeCaked: An Explainable Artificial
Intelligence Model for COVID-19 Forecasting. Sci. Rep. 2022, 12, 7969. [CrossRef] [PubMed]
59. Guarrasi, V.; Soda, P. Multi-objective optimization determines when, which and how to fuse deep networks: An application to
predict COVID-19 outcomes. Comput. Biol. Med. 2023, 154, 106625. [CrossRef]
60. Alabdulhafith, M.; Saleh, H.; Elmannai, H.; Ali, Z.H.; El-Sappagh, S.; Hu, J.W.; El-Rashidy, N. A Clinical Decision Support System
for Edge/Cloud ICU Readmission Model Based on Particle Swarm Optimization, Ensemble Machine Learning, and Explainable
Artificial Intelligence. IEEE Access 2023, 11, 100604–100621. [CrossRef]
61. Henzel, J.; Tobiasz, J.; Kozielski, M.; Bach, M.; Foszner, P.; Gruca, A.; Kania, M.; Mika, J.; Papiez, A.; Werner, A.; et al. Screening
Support System Based on Patient Survey Data-Case Study on Classification of Initial, Locally Collected COVID-19 Data. Appl.
Sci. 2021, 11, 790. [CrossRef]
62. Delgado-Gallegos, J.L.; Aviles-Rodriguez, G.; Padilla-Rivas, G.R.; Cosio-Leon, M.d.l.A.; Franco-Villareal, H.; Nieto-Hipolito,
J.I.; Lopez, J.d.D.S.; Zuniga-Violante, E.; Islas, J.F.; Romo-Cardenas, G.S. Application of C5.0 Algorithm for the Assessment of
Perceived Stress in Healthcare Professionals Attending COVID-19. Brain Sci. 2023, 13, 513. [CrossRef] [PubMed]
63. Yigit, T.; Sengoz, N.; Ozmen, O.; Hemanth, J.; Isik, A.H. Diagnosis of Paratuberculosis in Histopathological Images Based on
Explainable Artificial Intelligence and Deep Learning. Trait. Signal 2022, 39, 863–869. [CrossRef]
64. Papandrianos, I.N.; Feleki, A.; Moustakidis, S.; Papageorgiou, I.E.; Apostolopoulos, I.D.; Apostolopoulos, D.J. An Explainable
Classification Method of SPECT Myocardial Perfusion Images in Nuclear Cardiology Using Deep Learning and Grad-CAM. Appl.
Sci. 2022, 12, 7592. [CrossRef]
65. Zhang, Y.; Weng, Y.; Lund, J. Applications of Explainable Artificial Intelligence in Diagnosis and Surgery. Diagnostics 2022, 12, 237.
[CrossRef]
66. Rietberg, M.T.; Nguyen, V.B.; Geerdink, J.; Vijlbrief, O.; Seifert, C. Accurate and Reliable Classification of Unstructured Reports
on Their Diagnostic Goal Using BERT Models. Diagnostics 2023, 13, 1251. [CrossRef] [PubMed]
67. Ornek, A.H.; Ceylan, M. Explainable Artificial Intelligence (XAI): Classification of Medical Thermal Images of Neonates Using
Class Activation Maps. Trait. Signal 2021, 38, 1271–1279. [CrossRef]
68. Dindorf, C.; Konradi, J.; Wolf, C.; Taetz, B.; Bleser, G.; Huthwelker, J.; Werthmann, F.; Bartaguiz, E.; Kniepert, J.; Drees, P.; et al.
Classification and Automated Interpretation of Spinal Posture Data Using a Pathology-Independent Classifier and Explainable
Artificial Intelligence (XAI). Sensors 2021, 21, 6323. [CrossRef]
69. Sarp, S.; Kuzlu, M.; Wilson, E.; Cali, U.; Guler, O. The Enlightening Role of Explainable Artificial Intelligence in Chronic Wound
Classification. Electronics 2021, 10, 1406. [CrossRef]
70. Wang, M.H.; Chong, K.K.l.; Lin, Z.; Yu, X.; Pan, Y. An Explainable Artificial Intelligence-Based Robustness Optimization Approach
for Age-Related Macular Degeneration Detection Based on Medical IOT Systems. Electronics 2023, 12, 2697. [CrossRef]
71. Kalyakulina, A.; Yusipov, I.; Kondakova, E.; Bacalini, M.G.; Franceschi, C.; Vedunova, M.; Ivanchenko, M. Small immunological
clocks identified by deep learning and gradient boosting. Front. Immunol. 2023, 14, 1177611. [CrossRef]
72. Javed, A.R.; Khan, H.U.; Alomari, M.K.B.; Sarwar, M.U.; Asim, M.; Almadhor, A.S.; Khan, M.Z. Toward explainable AI-
empowered cognitive health assessment. Front. Public Health 2023, 11, 1024195. [CrossRef]
73. Valladares-Rodriguez, S.; Fernandez-Iglesias, M.J.; Anido-Rifon, L.E.; Pacheco-Lorenzo, M. Evaluation of the Predictive Ability
and User Acceptance of Panoramix 2.0, an AI-Based E-Health Tool for the Detection of Cognitive Impairment. Electronics 2022,
11, 3424. [CrossRef]
74. Moreno-Sanchez, P.A. Improvement of a prediction model for heart failure survival through explainable artificial intelligence.
Front. Cardiovasc. Med. 2023, 10, 1219586. [CrossRef]
75. Katsushika, S.; Kodera, S.; Sawano, S.; Shinohara, H.; Setoguchi, N.; Tanabe, K.; Higashikuni, Y.; Takeda, N.; Fujiu, K.; Daimon,
M.; et al. An explainable artificial intelligence-enabled electrocardiogram analysis model for the classification of reduced left
ventricular function. Eur. Heart J.-Digit. Health 2023, 4, 254–264. [CrossRef] [PubMed]
76. Kamal, M.S.; Dey, N.; Chowdhury, L.; Hasan, S.I.; Santosh, K.C. Explainable AI for Glaucoma Prediction Analysis to Understand
Risk Factors in Treatment Planning. IEEE Trans. Instrum. Meas. 2022, 71, 2509209. [CrossRef]
77. Deperlioglu, O.; Kose, U.; Gupta, D.; Khanna, A.; Giampaolo, F.; Fortino, G. Explainable framework for Glaucoma diagnosis by
image processing and convolutional neural network synergy: Analysis with doctor evaluation. Future Gener. Comput. Syst.-Int. J.
eScience 2022, 129, 152–169. [CrossRef]
78. Kim, Y.K.; Koo, J.H.; Lee, S.J.; Song, H.S.; Lee, M. Explainable Artificial Intelligence Warning Model Using an Ensemble Approach
for In-Hospital Cardiac Arrest Prediction: Retrospective Cohort Study. J. Med. Internet Res. 2023, 25, e48244. [CrossRef]
79. Obayya, M.; Nemri, N.; Nour, M.K.; Al Duhayyim, M.; Mohsen, H.; Rizwanullah, M.; Zamani, A.S.; Motwakel, A. Explainable
Artificial Intelligence Enabled TeleOphthalmology for Diabetic Retinopathy Grading and Classification. Appl. Sci. 2022, 12, 8749.
[CrossRef]
80. Ganguly, R.; Singh, D. Explainable Artificial Intelligence (XAI) for the Prediction of Diabetes Management: An Ensemble
Approach. Int. J. Adv. Comput. Sci. Appl. 2023, 14, 158–163. [CrossRef]
81. Hendawi, R.; Li, J.; Roy, S. A Mobile App That Addresses Interpretability Challenges in Machine Learning-Based Diabetes
Predictions: Survey-Based User Study. JMIR Form. Res. 2023, 7, e50328. [CrossRef]
82. Maaroof, N.; Moreno, A.; Valls, A.; Jabreel, M.; Romero-Aroca, P. Multi-Class Fuzzy-LORE: A Method for Extracting Local and
Counterfactual Explanations Using Fuzzy Decision Trees. Electronics 2023, 12, 2215. [CrossRef]
83. Raza, A.; Tran, K.P.; Koehl, L.; Li, S. Designing ECG monitoring healthcare system with federated transfer learning and explainable
AI. Knowl.-Based Syst. 2022, 236, 107763. [CrossRef]
84. Singh, P.; Sharma, A. Interpretation and Classification of Arrhythmia Using Deep Convolutional Network. IEEE Trans. Instrum.
Meas. 2022, 71, 2518512. [CrossRef]
85. Mollaei, N.; Fujao, C.; Silva, L.; Rodrigues, J.; Cepeda, C.; Gamboa, H. Human-Centered Explainable Artificial Intelligence:
Automotive Occupational Health Protection Profiles in Prevention of Musculoskeletal Symptoms. Int. J. Environ. Res. Public Health
2022, 19, 9552. [CrossRef] [PubMed]
86. Petrauskas, V.; Jasinevicius, R.; Damuleviciene, G.; Liutkevicius, A.; Janaviciute, A.; Lesauskaite, V.; Knasiene, J.; Meskauskas,
Z.; Dovydaitis, J.; Kazanavicius, V.; et al. Explainable Artificial Intelligence-Based Decision Support System for Assessing the
Nutrition-Related Geriatric Syndromes. Appl. Sci. 2021, 11, 1763. [CrossRef]
87. George, R.; Ellis, B.; West, A.; Graff, A.; Weaver, S.; Abramowski, M.; Brown, K.; Kerr, L.; Lu, S.C.; Swisher, C.; et al. Ensuring fair,
safe, and interpretable artificial intelligence-based prediction tools in a real-world oncological setting. Commun. Med. 2023, 3, 88.
[CrossRef] [PubMed]
88. Ivanovic, M.; Autexier, S.; Kokkonidis, M.; Rust, J. Quality medical data management within an open AI architecture-cancer
patients case. Connect. Sci. 2023, 35, 2194581. [CrossRef]
89. Zhang, H.; Ogasawara, K. Grad-CAM-Based Explainable Artificial Intelligence Related to Medical Text Processing. Bioengineering
2023, 10, 1070. [CrossRef]
90. Zlahtic, B.; Zavrsnik, J.; Vosner, H.B.; Kokol, P.; Suran, D.; Zavrsnik, T. Agile Machine Learning Model Development Using Data
Canyons in Medicine: A Step towards Explainable Artificial Intelligence and Flexible Expert-Based Model Improvement. Appl.
Sci. 2023, 13, 8329. [CrossRef]
91. Gouverneur, P.; Li, F.; Shirahama, K.; Luebke, L.; Adamczyk, W.M.; Szikszay, T.M.M.; Luedtke, K.; Grzegorzek, M. Explainable
Artificial Intelligence (XAI) in Pain Research: Understanding the Role of Electrodermal Activity for Automated Pain Recognition.
Sensors 2023, 23, 1959. [CrossRef]
92. Real, K.S.D.; Rubio, A. Discovering the mechanism of action of drugs with a sparse explainable network. Ebiomedicine 2023, 95,
104767. [CrossRef]
93. Park, A.; Lee, Y.; Nam, S. A performance evaluation of drug response prediction models for individual drugs. Sci. Rep. 2023,
13, 11911. [CrossRef] [PubMed]
94. Li, D.; Liu, Y.; Huang, J.; Wang, Z. A Trustworthy View on Explainable Artificial Intelligence Method Evaluation. Computer 2023,
56, 50–60. [CrossRef]
95. Chen, T.C.T.; Chiu, M.C. Evaluating the sustainability of smart technology applications in healthcare after the COVID-19
pandemic: A hybridising subjective and objective fuzzy group decision-making approach with explainable artificial intelligence.
Digit. Health 2022, 8, 20552076221136381. [CrossRef]
96. Bhatia, S.; Albarrak, A.S. A Blockchain-Driven Food Supply Chain Management Using QR Code and XAI-Faster RCNN
Architecture. Sustainability 2023, 15, 2579. [CrossRef]
97. Konradi, J.; Zajber, M.; Betz, U.; Drees, P.; Gerken, A.; Meine, H. AI-Based Detection of Aspiration for Video-Endoscopy with
Visual Aids in Meaningful Frames to Interpret the Model Outcome. Sensors 2022, 22, 9468. [CrossRef]
98. Aquino, G.; Costa, M.G.F.; Costa Filho, C.F.F. Explaining and Visualizing Embeddings of One-Dimensional Convolutional Models
in Human Activity Recognition Tasks. Sensors 2023, 23, 4409. [CrossRef]
99. Vijayvargiya, A.; Singh, P.; Kumar, R.; Dey, N. Hardware Implementation for Lower Limb Surface EMG Measurement and
Analysis Using Explainable AI for Activity Recognition. IEEE Trans. Instrum. Meas. 2022, 71, 2004909. [CrossRef]
100. Iliadou, E.; Su, Q.; Kikidis, D.; Bibas, T.; Kloukinas, C. Profiling hearing aid users through big data explainable artificial
intelligence techniques. Front. Neurol. 2022, 13, 933940. [CrossRef]
101. Wang, X.; Qiao, Y.; Cui, Y.; Ren, H.; Zhao, Y.; Linghu, L.; Ren, J.; Zhao, Z.; Chen, L.; Qiu, L. An explainable artificial intelligence
framework for risk prediction of COPD in smokers. BMC Public Health 2023, 23, 2164. [CrossRef] [PubMed]
102. Drobnic, F.; Starc, G.; Jurak, G.; Kos, A.; Pustisek, M. Explained Learning and Hyperparameter Optimization of Ensemble
Estimator on the Bio-Psycho-Social Features of Children and Adolescents. Electronics 2023, 12, 4097. [CrossRef]
103. Jeong, T.; Park, U.; Kang, S.W. Novel quantitative electroencephalogram feature image adapted for deep learning: Verification
through classification of Alzheimer’s disease dementia. Front. Neurosci. 2022, 16, 1033379. [CrossRef] [PubMed]
104. Varghese, A.; George, B.; Sherimon, V.; Al Shuaily, H.S. Enhancing Trust in Alzheimer’s Disease Classification using Explainable
Artificial Intelligence: Incorporating Local Post Hoc Explanations for a Glass-box Model. Bahrain Med. Bull. 2023, 45, 1471–1478.
105. Amoroso, N.; Quarto, S.; La Rocca, M.; Tangaro, S.; Monaco, A.; Bellotti, R. An eXplainability Artificial Intelligence approach to
brain connectivity in Alzheimer’s disease. Front. Aging Neurosci. 2023, 15, 1238065. [CrossRef] [PubMed]
106. Kamal, M.S.; Northcote, A.; Chowdhury, L.; Dey, N.; Gonzalez Crespo, R.; Herrera-Viedma, E. Alzheimer’s Patient Analysis
Using Image and Gene Expression Data and Explainable-AI to Present Associated Genes. IEEE Trans. Instrum. Meas. 2021,
70, 2513107. [CrossRef]
107. Hernandez, M.; Ramon-Julvez, U.; Ferraz, F.; ADNI Consortium. Explainable AI toward understanding the performance of the top
three TADPOLE Challenge methods in the forecast of Alzheimer’s disease diagnosis. PLoS ONE 2022, 17, e0264695. [CrossRef]
[PubMed]
108. El-Sappagh, S.; Alonso, J.M.; Islam, S.M.R.; Sultan, A.M.; Kwak, K.S. A multilayer multimodal detection and prediction model
based on explainable artificial intelligence for Alzheimer’s disease. Sci. Rep. 2021, 11, 2660. [CrossRef]
109. Mahim, S.M.; Ali, M.S.; Hasan, M.O.; Nafi, A.A.N.; Sadat, A.; Al Hasan, S.A.; Shareef, B.; Ahsan, M.M.; Islam, M.K.; Miah, M.S.;
et al. Unlocking the Potential of XAI for Improved Alzheimer’s Disease Detection and Classification Using a ViT-GRU Model.
IEEE Access 2024, 12, 8390–8412. [CrossRef]
110. Bhandari, N.; Walambe, R.; Kotecha, K.; Kaliya, M. Integrative gene expression analysis for the diagnosis of Parkinson’s disease
using machine learning and explainable AI. Comput. Biol. Med. 2023, 163, 107140. [CrossRef] [PubMed]
111. Kalyakulina, A.; Yusipov, I.; Bacalini, M.G.; Franceschi, C.; Vedunova, M.; Ivanchenko, M. Disease classification for whole-blood
DNA methylation: Meta-analysis, missing values imputation, and XAI. Gigascience 2022, 11, giac097. [CrossRef] [PubMed]
112. McFall, G.P.; Bohn, L.; Gee, M.; Drouin, S.M.; Fah, H.; Han, W.; Li, L.; Camicioli, R.; Dixon, R.A. Identifying key multi-modal
predictors of incipient dementia in Parkinson’s disease: A machine learning analysis and Tree SHAP interpretation. Front. Aging
Neurosci. 2023, 15, 1124232. [CrossRef]
113. Pianpanit, T.; Lolak, S.; Sawangjai, P.; Sudhawiyangkul, T.; Wilaiprasitporn, T. Parkinson’s Disease Recognition Using SPECT
Image and Interpretable AI: A Tutorial. IEEE Sens. J. 2021, 21, 22304–22316. [CrossRef]
114. Kumar, A.; Manikandan, R.; Kose, U.; Gupta, D.; Satapathy, S.C. Doctor’s Dilemma: Evaluating an Explainable Subtractive
Spatial Lightweight Convolutional Neural Network for Brain Tumor Diagnosis. ACM Trans. Multimed. Comput. Commun. Appl.
2021, 17, 105. [CrossRef]
115. Gaur, L.; Bhandari, M.; Razdan, T.; Mallik, S.; Zhao, Z. Explanation-Driven Deep Learning Model for Prediction of Brain Tumour
Status Using MRI Image Data. Front. Genet. 2022, 13, 822666. [CrossRef]
116. Tasci, B. Attention Deep Feature Extraction from Brain MRIs in Explainable Mode: DGXAINet. Diagnostics 2023, 13, 859.
[CrossRef] [PubMed]
117. Esmaeili, M.; Vettukattil, R.; Banitalebi, H.; Krogh, N.R.; Geitung, J.T. Explainable Artificial Intelligence for Human-Machine
Interaction in Brain Tumor Localization. J. Pers. Med. 2021, 11, 1213. [CrossRef] [PubMed]
118. Maqsood, S.; Damasevicius, R.; Maskeliunas, R. Multi-Modal Brain Tumor Detection Using Deep Neural Network and Multiclass
SVM. Medicina 2022, 58, 1090. [CrossRef]
119. Solorio-Ramirez, J.L.; Saldana-Perez, M.; Lytras, M.D.; Moreno-Ibarra, M.A.; Yanez-Marquez, C. Brain Hemorrhage Classification
in CT Scan Images Using Minimalist Machine Learning. Diagnostics 2021, 11, 1449. [CrossRef]
120. Andreu-Perez, J.; Emberson, L.L.; Kiani, M.; Filippetti, M.L.; Hagras, H.; Rigato, S. Explainable artificial intelligence based
analysis for interpreting infant fNIRS data in developmental cognitive neuroscience. Commun. Biol. 2021, 4, 1077. [CrossRef]
121. Hilal, A.M.; Issaoui, I.; Obayya, M.; Al-Wesabi, F.N.; Nemri, N.; Hamza, M.A.; Al Duhayyim, M.; Zamani, A.S. Modeling of
Explainable Artificial Intelligence for Biomedical Mental Disorder Diagnosis. CMC-Comput. Mater. Contin. 2022, 71, 3853–3867.
[CrossRef]
122. Vieira, J.C.; Guedes, L.A.; Santos, M.R.; Sanchez-Gendriz, I.; He, F.; Wei, H.L.; Guo, Y.; Zhao, Y. Using Explainable Artificial
Intelligence to Obtain Efficient Seizure-Detection Models Based on Electroencephalography Signals. Sensors 2023, 23, 9871.
[CrossRef]
123. Al-Hussaini, I.; Mitchell, C.S. SeizFt: Interpretable Machine Learning for Seizure Detection Using Wearables. Bioengineering 2023,
10, 918. [CrossRef]
124. Li, Z.; Li, R.; Zhou, Y.; Rasmy, L.; Zhi, D.; Zhu, P.; Dono, A.; Jiang, X.; Xu, H.; Esquenazi, Y.; et al. Prediction of Brain Metastases
Development in Patients with Lung Cancer by Explainable Artificial Intelligence from Electronic Health Records. JCO Clin.
Cancer Inform. 2023, 7, e2200141. [CrossRef] [PubMed]
125. Azam, H.; Tariq, H.; Shehzad, D.; Akbar, S.; Shah, H.; Khan, Z.A. Fully Automated Skull Stripping from Brain Magnetic
Resonance Images Using Mask RCNN-Based Deep Learning Neural Networks. Brain Sci. 2023, 13, 1255. [CrossRef] [PubMed]
126. Sasahara, K.; Shibata, M.; Sasabe, H.; Suzuki, T.; Takeuchi, K.; Umehara, K.; Kashiyama, E. Feature importance of machine
learning prediction models shows structurally active part and important physicochemical features in drug design. Drug Metab.
Pharmacokinet. 2021, 39, 100401. [CrossRef]
127. Wang, Q.; Huang, K.; Chandak, P.; Zitnik, M.; Gehlenborg, N. Extending the Nested Model for User-Centric XAI: A Design Study
on GNN-based Drug Repurposing. IEEE Trans. Vis. Comput. Graph. 2023, 29, 1266–1276. [CrossRef]
128. Castiglione, F.; Nardini, C.; Onofri, E.; Pedicini, M.; Tieri, P. Explainable Drug Repurposing Approach from Biased Random
Walks. IEEE/ACM Trans. Comput. Biol. Bioinform. 2023, 20, 1009–1019. [CrossRef]
129. Jena, R.; Pradhan, B.; Gite, S.; Alamri, A.; Park, H.J. A new method to promptly evaluate spatial earthquake probability mapping
using an explainable artificial intelligence (XAI) model. Gondwana Res. 2023, 123, 54–67. [CrossRef]
130. Jena, R.; Shanableh, A.; Al-Ruzouq, R.; Pradhan, B.; Gibril, M.B.A.; Khalil, M.A.; Ghorbanzadeh, O.; Ganapathy, G.P.; Ghamisi, P.
Explainable Artificial Intelligence (XAI) Model for Earthquake Spatial Probability Assessment in Arabian Peninsula. Remote. Sens.
2023, 15, 2248. [CrossRef]
131. Alshehri, F.; Rahman, A. Coupling Machine and Deep Learning with Explainable Artificial Intelligence for Improving Prediction
of Groundwater Quality and Decision-Making in Arid Region, Saudi Arabia. Water 2023, 15, 2298. [CrossRef]
132. Clare, M.C.A.; Sonnewald, M.; Lguensat, R.; Deshayes, J.; Balaji, V. Explainable Artificial Intelligence for Bayesian Neural
Networks: Toward Trustworthy Predictions of Ocean Dynamics. J. Adv. Model. Earth Syst. 2022, 14, e2022MS003162. [CrossRef]
133. Nunez, J.; Cortes, C.B.; Yanez, M.A. Explainable Artificial Intelligence in Hydrology: Interpreting Black-Box Snowmelt-Driven
Streamflow Predictions in an Arid Andean Basin of North-Central Chile. Water 2023, 15, 3369. [CrossRef]
134. Kolevatova, A.; Riegler, M.A.; Cherubini, F.; Hu, X.; Hammer, H.L. Unraveling the Impact of Land Cover Changes on Climate
Using Machine Learning and Explainable Artificial Intelligence. Big Data Cogn. Comput. 2021, 5, 55. [CrossRef]
135. Xue, P.; Wagh, A.; Ma, G.; Wang, Y.; Yang, Y.; Liu, T.; Huang, C. Integrating Deep Learning and Hydrodynamic Modeling to
Improve the Great Lakes Forecast. Remote. Sens. 2022, 14, 2640. [CrossRef]
136. Huang, F.; Zhang, Y.; Zhang, Y.; Nourani, V.; Li, Q.; Li, L.; Shangguan, W. Towards interpreting machine learning models for
predicting soil moisture droughts. Environ. Res. Lett. 2023, 18, 074002. [CrossRef]
137. Huynh, T.M.T.; Ni, C.F.; Su, Y.S.; Nguyen, V.C.N.; Lee, I.H.; Lin, C.P.; Nguyen, H.H. Predicting Heavy Metal Concentrations in
Shallow Aquifer Systems Based on Low-Cost Physiochemical Parameters Using Machine Learning Techniques. Int. J. Environ.
Res. Public Health 2022, 19, 12180. [CrossRef] [PubMed]
138. Bandstra, M.S.; Curtis, J.C.; Ghawaly, J.M., Jr.; Jones, A.C.; Joshi, T.H.Y. Explaining machine-learning models for gamma-ray
detection and identification. PLoS ONE 2023, 18, e0286829. [CrossRef]
139. Andresini, G.; Appice, A.; Malerba, D. SILVIA: An eXplainable Framework to Map Bark Beetle Infestation in Sentinel-2 Images.
IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens. 2023, 16, 10050–10066. [CrossRef]
140. van Stein, B.; Raponi, E.; Sadeghi, Z.; Bouman, N.; van Ham, R.; Back, T. A Comparison of Global Sensitivity Analysis Methods
for Explainable AI with an Application in Genomic Prediction. IEEE Access 2022, 10, 103364–103381. [CrossRef]
141. Quach, L.D.; Quoc, K.N.; Quynh, A.N.; Thai-Nghe, N.; Nguyen, T.G. Explainable Deep Learning Models with Gradient-Weighted
Class Activation Mapping for Smart Agriculture. IEEE Access 2023, 11, 83752–83762. [CrossRef]
142. Lysov, M.; Pukhkiy, K.; Vasiliev, E.; Getmanskaya, A.; Turlapov, V. Ensuring Explainability and Dimensionality Reduction in a
Multidimensional HSI World for Early XAI-Diagnostics of Plant Stress. Entropy 2023, 25, 801. [CrossRef]
143. Iatrou, M.; Karydas, C.; Tseni, X.; Mourelatos, S. Representation Learning with a Variational Autoencoder for Predicting Nitrogen
Requirement in Rice. Remote. Sens. 2022, 14, 5978. [CrossRef]
144. Zinonos, Z.; Gkelios, S.; Khalifeh, A.F.; Hadjimitsis, D.G.; Boutalis, Y.S.; Chatzichristofis, S.A. Grape Leaf Diseases Identification
System Using Convolutional Neural Networks and LoRa Technology. IEEE Access 2022, 10, 122–133. [CrossRef]
145. Danilevicz, M.F.; Gill, M.; Fernandez, C.G.T.; Petereit, J.; Upadhyaya, S.R.; Batley, J.; Bennamoun, M.; Edwards, D.; Bayer, P.E.
DNABERT-based explainable lncRNA identification in plant genome assemblies. Comput. Struct. Biotechnol. J. 2023, 21, 5676–5685.
[CrossRef]
146. Kim, M.; Kim, D.; Jin, D.; Kim, G. Application of Explainable Artificial Intelligence (XAI) in Urban Growth Modeling: A Case
Study of Seoul Metropolitan Area, Korea. Land 2023, 12, 420. [CrossRef]
147. Galli, A.; Piscitelli, M.S.; Moscato, V.; Capozzoli, A. Bridging the gap between complexity and interpretability of a
data-analytics-based process for benchmarking energy performance of buildings. Expert Syst. Appl. 2022, 206, 117649. [CrossRef]
148. Nguyen, D.D.; Tanveer, M.; Mai, H.N.; Pham, T.Q.D.; Khan, H.; Park, C.W.; Kim, G.M. Guiding the optimization of membraneless
microfluidic fuel cells via explainable artificial intelligence: Comparative analyses of multiple machine learning models and
investigation of key operating parameters. Fuel 2023, 349, 128742. [CrossRef]
149. Pandey, D.S.; Raza, H.; Bhattacharyya, S. Development of explainable AI-based predictive models for bubbling fluidised bed
gasification process. Fuel 2023, 351, 128971. [CrossRef]
150. Wongburi, P.; Park, J.K. Prediction of Sludge Volume Index in a Wastewater Treatment Plant Using Recurrent Neural Network.
Sustainability 2022, 14, 6276. [CrossRef]
151. Aslam, N.; Khan, I.U.; Alansari, A.; Alrammah, M.; Alghwairy, A.; Alqahtani, R.; Alqahtani, R.; Almushikes, M.; Hashim, M.A.L.
Anomaly Detection Using Explainable Random Forest for the Prediction of Undesirable Events in Oil Wells. Appl. Comput. Intell.
Soft Comput. 2022, 2022, 1558381. [CrossRef]
152. Mardian, J.; Champagne, C.; Bonsal, B.; Berg, A. Understanding the Drivers of Drought Onset and Intensification in the Canadian
Prairies: Insights from Explainable Artificial Intelligence (XAI). J. Hydrometeorol. 2023, 24, 2035–2055. [CrossRef]
153. Youness, G.; Aalah, A. An Explainable Artificial Intelligence Approach for Remaining Useful Life Prediction. Aerospace 2023,
10, 474. [CrossRef]
154. Chowdhury, D.; Sinha, A.; Das, D. XAI-3DP: Diagnosis and Understanding Faults of 3-D Printer with Explainable Ensemble AI.
IEEE Sens. Lett. 2023, 7, 6000104. [CrossRef]
155. Chelgani, S.C.; Nasiri, H.; Tohry, A.; Heidari, H.R. Modeling industrial hydrocyclone operational variables by SHAP-CatBoost-A
“conscious lab” approach. Powder Technol. 2023, 420, 118416. [CrossRef]
156. Elkhawaga, G.; Abu-Elkheir, M.; Reichert, M. Explainability of Predictive Process Monitoring Results: Can You See My Data
Issues? Appl. Sci. 2022, 12, 8192. [CrossRef]
157. El-khawaga, G.; Abu-Elkheir, M.; Reichert, M. XAI in the Context of Predictive Process Monitoring: An Empirical Analysis
Framework. Algorithms 2022, 15, 199. [CrossRef]
158. Hanchate, A.; Bukkapatnam, S.T.S.; Lee, K.H.; Srivastava, A.; Kumara, S. Reprint of: Explainable AI (XAI)-driven vibration
sensing scheme for surface quality monitoring in a smart surface grinding process. J. Manuf. Process. 2023, 100, 64–74. [CrossRef]
159. Alfeo, A.L.L.; Cimino, M.G.C.A.; Vaglini, G. Degradation stage classification via interpretable feature learning. J. Manuf. Syst.
2022, 62, 972–983. [CrossRef]
160. Akyol, S.; Das, M.; Alatas, B. Modeling the Energy Consumption of R600a Gas in a Refrigeration System with New Explainable
Artificial Intelligence Methods Based on Hybrid Optimization. Biomimetics 2023, 8, 397. [CrossRef] [PubMed]
161. Sharma, K.V.; Sai, P.H.V.S.T.; Sharma, P.; Kanti, P.K.; Bhramara, P.; Akilu, S. Prognostic modeling of polydisperse SiO2/Aqueous
glycerol nanofluids’ thermophysical profile using an explainable artificial intelligence (XAI) approach. Eng. Appl. Artif. Intell.
2023, 126, 106967. [CrossRef]
162. Kulasooriya, W.K.V.J.B.; Ranasinghe, R.S.S.; Perera, U.S.; Thisovithan, P.; Ekanayake, I.U.; Meddage, D.P.P. Modeling strength
characteristics of basalt fiber reinforced concrete using multiple explainable machine learning with a graphical user interface. Sci.
Rep. 2023, 13, 13138. [CrossRef]
163. Geetha, G.K.; Sim, S.H. Fast identification of concrete cracks using 1D deep learning and explainable artificial intelligence-based
analysis. Autom. Constr. 2022, 143, 104572. [CrossRef]
164. Noh, Y.R.; Khalid, S.; Kim, H.S.; Choi, S.K. Intelligent Fault Diagnosis of Robotic Strain Wave Gear Reducer Using Area-Metric-
Based Sampling. Mathematics 2023, 11, 4081. [CrossRef]
165. Gim, J.; Lin, C.Y.; Turng, L.S. In-mold condition-centered and explainable artificial intelligence-based (IMC-XAI) process
optimization for injection molding. J. Manuf. Syst. 2024, 72, 196–213. [CrossRef]
166. Rozanec, J.M.; Trajkova, E.; Lu, J.; Sarantinoudis, N.; Arampatzis, G.; Eirinakis, P.; Mourtos, I.; Onat, M.K.; Yilmaz, D.A.; Kosmerlj,
A.; et al. Cyber-Physical LPG Debutanizer Distillation Columns: Machine-Learning-Based Soft Sensors for Product Quality
Monitoring. Appl. Sci. 2021, 11, 1790. [CrossRef]
167. Bobek, S.; Kuk, M.; Szelazek, M.; Nalepa, G.J. Enhancing Cluster Analysis with Explainable AI and Multidimensional Cluster
Prototypes. IEEE Access 2022, 10, 101556–101574. [CrossRef]
168. Chen, T.C.T.; Lin, C.W.; Lin, Y.C. A fuzzy collaborative forecasting approach based on XAI applications for cycle time range
estimation. Appl. Soft Comput. 2024, 151, 111122. [CrossRef]
169. Lee, Y.; Roh, Y. An Expandable Yield Prediction Framework Using Explainable Artificial Intelligence for Semiconductor
Manufacturing. Appl. Sci. 2023, 13, 2660. [CrossRef]
170. Alqaralleh, B.A.Y.; Aldhaban, F.; AlQarallehs, E.A.; Al-Omari, A.H. Optimal Machine Learning Enabled Intrusion Detection in
Cyber-Physical System Environment. CMC-Comput. Mater. Contin. 2022, 72, 4691–4707. [CrossRef]
171. Younisse, R.; Ahmad, A.; Abu Al-Haija, Q. Explaining Intrusion Detection-Based Convolutional Neural Networks Using Shapley
Additive Explanations (SHAP). Big Data Cogn. Comput. 2022, 6, 126. [CrossRef]
172. Larriva-Novo, X.; Sanchez-Zas, C.; Villagra, V.A.; Marin-Lopez, A.; Berrocal, J. Leveraging Explainable Artificial Intelligence in
Real-Time Cyberattack Identification: Intrusion Detection System Approach. Appl. Sci. 2023, 13, 8587. [CrossRef]
173. Mahbooba, B.; Timilsina, M.; Sahal, R.; Serrano, M. Explainable Artificial Intelligence (XAI) to Enhance Trust Management in
Intrusion Detection Systems Using Decision Tree Model. Complexity 2021, 2021, 6634811. [CrossRef]
174. Ferretti, C.; Saletta, M. Do Neural Transformers Learn Human-Defined Concepts? An Extensive Study in Source Code Processing
Domain. Algorithms 2022, 15, 449. [CrossRef]
175. Rjoub, G.; Bentahar, J.; Wahab, O.A.; Mizouni, R.; Song, A.; Cohen, R.; Otrok, H.; Mourad, A. A Survey on Explainable Artificial
Intelligence for Cybersecurity. IEEE Trans. Netw. Serv. Manag. 2023, 20, 5115–5140. [CrossRef]
176. Kuppa, A.; Le-Khac, N.A. Adversarial XAI Methods in Cybersecurity. IEEE Trans. Inf. Forensics Secur. 2021, 16, 4924–4938.
[CrossRef]
177. Jo, J.; Cho, J.; Moon, J. A Malware Detection and Extraction Method for the Related Information Using the ViT Attention
Mechanism on Android Operating System. Appl. Sci. 2023, 13, 6839. [CrossRef]
178. Lin, Y.S.; Liu, Z.Y.; Chen, Y.A.; Wang, Y.S.; Chang, Y.L.; Hsu, W.H. xCos: An Explainable Cosine Metric for Face Verification Task.
ACM Trans. Multimed. Comput. Commun. Appl. 2021, 17, 112. [CrossRef]
179. Lim, S.Y.; Chae, D.K.; Lee, S.C. Detecting Deepfake Voice Using Explainable Deep Learning Techniques. Appl. Sci. 2022, 12, 3926.
[CrossRef]
180. Zhang, Z.; Umar, S.; Al Hammadi, A.Y.; Yoon, S.; Damiani, E.; Ardagna, C.A.; Bena, N.; Yeun, C.Y. Explainable Data Poison
Attacks on Human Emotion Evaluation Systems Based on EEG Signals. IEEE Access 2023, 11, 18134–18147. [CrossRef]
181. Muna, R.K.; Hossain, M.I.; Alam, M.G.R.; Hassan, M.M.; Ianni, M.; Fortino, G. Demystifying machine learning models of massive
IoT attack detection with Explainable AI for sustainable and secure future smart cities. Internet Things 2023, 24, 100919. [CrossRef]
182. Luo, R.; Xing, J.; Chen, L.; Pan, Z.; Cai, X.; Li, Z.; Wang, J.; Ford, A. Glassboxing Deep Learning to Enhance Aircraft Detection
from SAR Imagery. Remote. Sens. 2021, 13, 3650. [CrossRef]
183. Perez-Landa, G.I.; Loyola-Gonzalez, O.; Medina-Perez, M.A. An Explainable Artificial Intelligence Model for Detecting Xenopho-
bic Tweets. Appl. Sci. 2021, 11, 10801. [CrossRef]
184. Neupane, S.; Ables, J.; Anderson, W.; Mittal, S.; Rahimi, S.; Banicescu, I.; Seale, M. Explainable Intrusion Detection Systems
(X-IDS): A Survey of Current Methods, Challenges, and Opportunities. IEEE Access 2022, 10, 112392–112415. [CrossRef]
185. Manoharan, H.; Yuvaraja, T.; Kuppusamy, R.; Radhakrishnan, A. Implementation of explainable artificial intelligence in
commercial communication systems using micro systems. Sci. Prog. 2023, 106, 00368504231191657. [CrossRef] [PubMed]
186. Berger, T. Explainable artificial intelligence and economic panel data: A study on volatility spillover along the supply chains.
Financ. Res. Lett. 2023, 54, 103757. [CrossRef]
187. Raval, J.; Bhattacharya, P.; Jadav, N.K.; Tanwar, S.; Sharma, G.; Bokoro, P.N.; Elmorsy, M.; Tolba, A.; Raboaca, M.S. RaKShA: A
Trusted Explainable LSTM Model to Classify Fraud Patterns on Credit Card Transactions. Mathematics 2023, 11, 1901. [CrossRef]
188. Martinez, M.A.M.; Nadj, M.; Langner, M.; Toreini, P.; Maedche, A. Does this Explanation Help? Designing Local Model-agnostic
Explanation Representations and an Experimental Evaluation Using Eye-tracking Technology. ACM Trans. Interact. Intell. Syst.
2023, 13, 27. [CrossRef]
189. Martins, T.; de Almeida, A.M.; Cardoso, E.; Nunes, L. Explainable Artificial Intelligence (XAI): A Systematic Literature Review on
Taxonomies and Applications in Finance. IEEE Access 2024, 12, 618–629. [CrossRef]
190. Moscato, V.; Picariello, A.; Sperli, G. A benchmark of machine learning approaches for credit score prediction. Expert Syst. Appl.
2021, 165, 113986. [CrossRef]
191. Gramespacher, T.; Posth, J.A. Employing Explainable AI to Optimize the Return Target Function of a Loan Portfolio. Front. Artif.
Intell. 2021, 4, 693022. [CrossRef]
192. Gramegna, A.; Giudici, P. SHAP and LIME: An Evaluation of Discriminative Power in Credit Risk. Front. Artif. Intell. 2021, 4,
752558. [CrossRef]
193. Rudin, C.; Shaposhnik, Y. Globally-Consistent Rule-Based Summary-Explanations for Machine Learning Models: Application to
Credit-Risk Evaluation. J. Mach. Learn. Res. 2023, 24, 1–44. [CrossRef]
194. Torky, M.; Gad, I.; Hassanien, A.E. Explainable AI Model for Recognizing Financial Crisis Roots Based on Pigeon Optimization
and Gradient Boosting Model. Int. J. Comput. Intell. Syst. 2023, 16, 50. [CrossRef]
195. Bermudez, L.; Anaya, D.; Belles-Sampera, J. Explainable AI for paid-up risk management in life insurance products. Financ. Res.
Lett. 2023, 57, 104242. [CrossRef]
196. Rozanec, J.; Trajkova, E.; Kenda, K.; Fortuna, B.; Mladenic, D. Explaining Bad Forecasts in Global Time Series Models. Appl. Sci.
2021, 11, 9243. [CrossRef]
197. Kim, H.S.; Joe, I. An XAI method for convolutional neural networks in self-driving cars. PLoS ONE 2022, 17, e0267282. [CrossRef]
198. Veitch, E.; Alsos, O.A. Human-Centered Explainable Artificial Intelligence for Marine Autonomous Surface Vehicles. J. Mar. Sci.
Eng. 2021, 9, 227. [CrossRef]
199. Dworak, D.; Baranowski, J. Adaptation of Grad-CAM Method to Neural Network Architecture for LiDAR Pointcloud Object
Detection. Energies 2022, 15, 4681. [CrossRef]
200. Renda, A.; Ducange, P.; Marcelloni, F.; Sabella, D.; Filippou, M.C.; Nardini, G.; Stea, G.; Virdis, A.; Micheli, D.; Rapone, D.; et al.
Federated Learning of Explainable AI Models in 6G Systems: Towards Secure and Automated Vehicle Networking. Information
2022, 13, 395. [CrossRef]
201. Lorente, M.P.S.; Lopez, E.M.; Florez, L.A.; Espino, A.L.; Martinez, J.A.I.; de Miguel, A.S. Explaining Deep Learning-Based Driver
Models. Appl. Sci. 2021, 11, 3321. [CrossRef]
202. Qaffas, A.A.; Ben HajKacem, M.A.; Ben Ncir, C.E.; Nasraoui, O. An Explainable Artificial Intelligence Approach for Multi-Criteria
ABC Item Classification. J. Theor. Appl. Electron. Commer. Res. 2023, 18, 848–866. [CrossRef]
203. Yilmazer, R.; Birant, D. Shelf Auditing Based on Image Classification Using Semi-Supervised Deep Learning to Increase On-Shelf
Availability in Grocery Stores. Sensors 2021, 21, 327. [CrossRef] [PubMed]
204. Lee, J.; Jung, O.; Lee, Y.; Kim, O.; Park, C. A Comparison and Interpretation of Machine Learning Algorithm for the Prediction of
Online Purchase Conversion. J. Theor. Appl. Electron. Commer. Res. 2021, 16, 1472–1491. [CrossRef]
205. Okazaki, K.; Inoue, K. Explainable Model Fusion for Customer Journey Mapping. Front. Artif. Intell. 2022, 5, 824197. [CrossRef]
206. Diaz, G.M.; Galan, J.J.; Carrasco, R.A. XAI for Churn Prediction in B2B Models: A Use Case in an Enterprise Software Company.
Mathematics 2022, 10, 3896. [CrossRef]
207. Matuszelanski, K.; Kopczewska, K. Customer Churn in Retail E-Commerce Business: Spatial and Machine Learning Approach.
J. Theor. Appl. Electron. Commer. Res. 2022, 17, 165–198. [CrossRef]
208. Pereira, F.D.; Fonseca, S.C.; Oliveira, E.H.T.; Cristea, I.A.; Bellhauser, H.; Rodrigues, L.; Oliveira, D.B.F.; Isotani, S.; Carvalho,
L.S.G. Explaining Individual and Collective Programming Students’ Behavior by Interpreting a Black-Box Predictive Model.
IEEE Access 2021, 9, 117097–117119. [CrossRef]
209. Alcauter, I.; Martinez-Villasenor, L.; Ponce, H. Explaining Factors of Student Attrition at Higher Education. Comput. Sist. 2023,
27, 929–940. [CrossRef]
210. Gomez-Cravioto, D.A.; Diaz-Ramos, R.E.; Hernandez-Gress, N.; Luis Preciado, J.; Ceballos, H.G. Supervised machine learning
predictive analytics for alumni income. J. Big Data 2022, 9, 11. [CrossRef]
211. Saarela, M.; Heilala, V.; Jaaskela, P.; Rantakaulio, A.; Karkkainen, T. Explainable Student Agency Analytics. IEEE Access 2021,
9, 137444–137459. [CrossRef]
212. Ramon, Y.; Farrokhnia, R.A.; Matz, S.C.; Martens, D. Explainable AI for Psychological Profiling from Behavioral Data: An
Application to Big Five Personality Predictions from Financial Transaction Records. Information 2021, 12, 518. [CrossRef]
213. Zytek, A.; Liu, D.; Vaithianathan, R.; Veeramachaneni, K. Sibyl: Understanding and Addressing the Usability Challenges of
Machine Learning In High-Stakes Decision Making. IEEE Trans. Vis. Comput. Graph. 2022, 28, 1161–1171. [CrossRef] [PubMed]
214. Rodriguez Oconitrillo, L.R.; Jose Vargas, J.; Camacho, A.; Burgos, A.; Manuel Corchado, J. RYEL: An Experimental Study in the
Behavioral Response of Judges Using a Novel Technique for Acquiring Higher-Order Thinking Based on Explainable Artificial
Intelligence and Case-Based Reasoning. Electronics 2021, 10, 1500. [CrossRef]
215. Escobar-Linero, E.; Garcia-Jimenez, M.; Trigo-Sanchez, M.E.; Cala-Carrillo, M.J.; Sevillano, J.L.; Dominguez-Morales, M. Using
machine learning-based systems to help predict disengagement from the legal proceedings by women victims of intimate partner
violence in Spain. PLoS ONE 2023, 18, e0276032. [CrossRef]
216. Sokhansanj, B.A.; Rosen, G.L. Predicting Institution Outcomes for Inter Partes Review (IPR) Proceedings at the United States
Patent Trial & Appeal Board by Deep Learning of Patent Owner Preliminary Response Briefs. Appl. Sci. 2022, 12, 3656. [CrossRef]
217. Cha, Y.; Lee, Y. Advanced sentence-embedding method considering token importance based on explainable artificial intelligence
and text summarization model. Neurocomputing 2024, 564, 126987. [CrossRef]
218. Sevastjanova, R.; Jentner, W.; Sperrle, F.; Kehlbeck, R.; Bernard, J.; El-assady, M. QuestionComb: A Gamification Approach for
the Visual Explanation of Linguistic Phenomena through Interactive Labeling. ACM Trans. Interact. Intell. Syst. 2021, 11, 19.
[CrossRef]
219. Sovrano, F.; Vitali, F. Generating User-Centred Explanations via Illocutionary Question Answering: From Philosophy to Interfaces.
ACM Trans. Interact. Intell. Syst. 2022, 12, 26. [CrossRef]
220. Kumar, A.; Dikshit, S.; Albuquerque, V.H.C. Explainable Artificial Intelligence for Sarcasm Detection in Dialogues. Wirel.
Commun. Mob. Comput. 2021, 2021, 2939334. [CrossRef]
221. de Velasco, M.; Justo, R.; Zorrilla, A.L.; Torres, M.I. Analysis of Deep Learning-Based Decision-Making in an Emotional
Spontaneous Speech Task. Appl. Sci. 2023, 13, 980. [CrossRef]
222. Huang, J.; Wu, X.; Wen, J.; Huang, C.; Luo, M.; Liu, L.; Zheng, Y. Evaluating Familiarity Ratings of Domain Concepts with
Interpretable Machine Learning: A Comparative Study. Appl. Sci. 2023, 13, 2818. [CrossRef]
223. Shah, A.; Ranka, P.; Dedhia, U.; Prasad, S.; Muni, S.; Bhowmick, K. Detecting and Unmasking AI-Generated Texts through
Explainable Artificial Intelligence using Stylistic Features. Int. J. Adv. Comput. Sci. Appl. 2023, 14, 1043–1053. [CrossRef]
224. Samih, A.; Ghadi, A.; Fennan, A. ExMrec2vec: Explainable Movie Recommender System based on Word2vec. Int. J. Adv. Comput.
Sci. Appl. 2021, 12, 653–660. [CrossRef]
225. Pisoni, G.; Diaz-Rodriguez, N.; Gijlers, H.; Tonolli, L. Human-Centered Artificial Intelligence for Designing Accessible Cultural
Heritage. Appl. Sci. 2021, 11, 870. [CrossRef]
226. Mishra, S.; Shukla, A.K.; Muhuri, P.K. Explainable Fuzzy AI Challenge 2022: Winner’s Approach to a Computationally Efficient
and Explainable Solution. Axioms 2022, 11, 489. [CrossRef]
227. Sullivan, R.S.; Longo, L. Explaining Deep Q-Learning Experience Replay with SHapley Additive exPlanations. Mach. Learn.
Knowl. Extr. 2023, 5, 1433–1455. [CrossRef]
228. Tao, J.; Xiong, Y.; Zhao, S.; Wu, R.; Shen, X.; Lyu, T.; Fan, C.; Hu, Z.; Zhao, S.; Pan, G. Explainable AI for Cheating Detection and
Churn Prediction in Online Games. IEEE Trans. Games 2023, 15, 242–251. [CrossRef]
229. Szczepanski, M.; Pawlicki, M.; Kozik, R.; Choras, M. New explainability method for BERT-based model in fake news detection.
Sci. Rep. 2021, 11, 23705. [CrossRef]
230. Liang, X.S.; Straub, J. Deceptive Online Content Detection Using Only Message Characteristics and a Machine Learning Trained
Expert System. Sensors 2021, 21, 7083. [CrossRef]
231. Gowrisankar, B.; Thing, V.L.L. An adversarial attack approach for eXplainable AI evaluation on deepfake detection models.
Comput. Secur. 2024, 139, 103684. [CrossRef]
232. Damian, S.; Calvo, H.; Gelbukh, A. Fake News detection using n-grams for PAN@CLEF competition. J. Intell. Fuzzy Syst. 2022,
42, 4633–4640. [CrossRef]
233. De Magistris, G.; Russo, S.; Roma, P.; Starczewski, J.T.; Napoli, C. An Explainable Fake News Detector Based on Named Entity
Recognition and Stance Classification Applied to COVID-19. Information 2022, 13, 137. [CrossRef]
234. Joshi, G.; Srivastava, A.; Yagnik, B.; Hasan, M.; Saiyed, Z.; Gabralla, L.A.; Abraham, A.; Walambe, R.; Kotecha, K. Explainable
Misinformation Detection across Multiple Social Media Platforms. IEEE Access 2023, 11, 23634–23646. [CrossRef]
235. Heimerl, A.; Weitz, K.; Baur, T.; Andre, E. Unraveling ML Models of Emotion with NOVA: Multi-Level Explainable AI for
Non-Experts. IEEE Trans. Affect. Comput. 2022, 13, 1155–1167. [CrossRef]
236. Beker, T.; Ansari, H.; Montazeri, S.; Song, Q.; Zhu, X.X. Deep Learning for Subtle Volcanic Deformation Detection with InSAR
Data in Central Volcanic Zone. IEEE Trans. Geosci. Remote. Sens. 2023, 61, 5218520. [CrossRef]
237. Khan, M.A.; Park, H.; Lombardi, M. Exploring Explainable Artificial Intelligence Techniques for Interpretable Neural Networks
in Traffic Sign Recognition Systems. Electronics 2024, 13, 306. [CrossRef]
238. Resendiz, J.L.D.; Ponomaryov, V.; Reyes, R.R.; Sadovnychiy, S. Explainable CAD System for Classification of Acute Lymphoblastic
Leukemia Based on a Robust White Blood Cell Segmentation. Cancers 2023, 15, 3376. [CrossRef]
239. Lundberg, S.M.; Erion, G.; Chen, H.; DeGrave, A.; Prutkin, J.M.; Nair, B.; Katz, R.; Himmelfarb, J.; Bansal, N.; Lee, S.I. From local
explanations to global understanding with explainable AI for trees. Nat. Mach. Intell. 2020, 2, 56–67. [CrossRef]
240. Lundberg, S.M.; Lee, S.I. A unified approach to interpreting model predictions. In Proceedings of the 31st International
Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 4768–4777.
241. Bello, M.; Napoles, G.; Concepcion, L.; Bello, R.; Mesejo, P.; Cordon, O. REPROT: Explaining the predictions of complex deep
learning architectures for object detection through reducts of an image. Inf. Sci. 2024, 654, 119851. [CrossRef]
242. Fouladgar, N.; Alirezaie, M.; Framling, K. Metrics and Evaluations of Time Series Explanations: An Application in Affect
Computing. IEEE Access 2022, 10, 23995–24009. [CrossRef]
243. Arrotta, L.; Civitarese, G.; Bettini, C. DeXAR: Deep Explainable Sensor-Based Activity Recognition in Smart-Home Environments.
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. (IMWUT) 2022, 6, 1. [CrossRef]
244. Astolfi, D.; De Caro, F.; Vaccaro, A. Condition Monitoring of Wind Turbine Systems by Explainable Artificial Intelligence
Techniques. Sensors 2023, 23, 5376. [CrossRef] [PubMed]
245. Jean-Quartier, C.; Bein, K.; Hejny, L.; Hofer, E.; Holzinger, A.; Jeanquartier, F. The Cost of Understanding-XAI Algorithms towards
Sustainable ML in the View of Computational Cost. Computation 2023, 11, 92. [CrossRef]
246. Stassin, S.; Corduant, V.; Mahmoudi, S.A.; Siebert, X. Explainability and Evaluation of Vision Transformers: An In-Depth
Experimental Study. Electronics 2024, 13, 175. [CrossRef]
247. Quach, L.D.; Quoc, K.N.; Quynh, A.N.; Ngoc, H.T.; Thai-Nghe, N. Tomato Health Monitoring System: Tomato Classification,
Detection, and Counting System Based on YOLOv8 Model with Explainable MobileNet Models Using Grad-CAM++. IEEE
Access 2024, 12, 9719–9737. [CrossRef]
248. Varam, D.; Mitra, R.; Mkadmi, M.; Riyas, R.A.; Abuhani, D.A.; Dhou, S.; Alzaatreh, A. Wireless Capsule Endoscopy Image
Classification: An Explainable AI Approach. IEEE Access 2023, 11, 105262–105280. [CrossRef]
249. Bhambra, P.; Joachimi, B.; Lahav, O. Explaining deep learning of galaxy morphology with saliency mapping. Mon. Not. R. Astron.
Soc. 2022, 511, 5032–5041. [CrossRef]
250. Huang, F.; Zhang, Y.; Zhang, Y.; Wei, S.; Li, Q.; Li, L.; Jiang, S. Interpreting Conv-LSTM for Spatio-Temporal Soil Moisture
Prediction in China. Agriculture 2023, 13, 971. [CrossRef]
251. Wei, K.; Chen, B.; Zhang, J.; Fan, S.; Wu, K.; Liu, G.; Chen, D. Explainable Deep Learning Study for Leaf Disease Classification.
Agronomy 2022, 12, 1035. [CrossRef]
252. Jin, W.; Li, X.; Fatehi, M.; Hamarneh, G. Generating post-hoc explanation from deep neural networks for multi-modal medical
image analysis tasks. Methodsx 2023, 10, 102009. [CrossRef]
253. Song, Z.; Trozzi, F.; Tian, H.; Yin, C.; Tao, P. Mechanistic Insights into Enzyme Catalysis from Explaining Machine-Learned
Quantum Mechanical and Molecular Mechanical Minimum Energy Pathways. ACS Phys. Chem. Au 2022, 2, 316–330. [CrossRef]
254. Brdar, S.; Panic, M.; Matavulj, P.; Stankovic, M.; Bartolic, D.; Sikoparija, B. Explainable AI for unveiling deep learning pollen
classification model based on fusion of scattered light patterns and fluorescence spectroscopy. Sci. Rep. 2023, 13, 3205. [CrossRef]
[PubMed]
255. Ullah, I.; Rios, A.; Gala, V.; Mckeever, S. Explaining Deep Learning Models for Tabular Data Using Layer-Wise Relevance
Propagation. Appl. Sci. 2022, 12, 136. [CrossRef]
256. Dong, S.; Jin, Y.; Bak, S.; Yoon, B.; Jeong, J. Explainable Convolutional Neural Network to Investigate Age-Related Changes in
Multi-Order Functional Connectivity. Electronics 2021, 10, 3020. [CrossRef]
257. Althoff, D.; Bazame, H.C.; Nascimento, J.G. Untangling hybrid hydrological models with explainable artificial intelligence.
H2Open J. 2021, 4, 13–28. [CrossRef]
258. Tiensuu, H.; Tamminen, S.; Puukko, E.; Roening, J. Evidence-Based and Explainable Smart Decision Support for Quality
Improvement in Stainless Steel Manufacturing. Appl. Sci. 2021, 11, 10897. [CrossRef]
259. Messner, W. From black box to clear box: A hypothesis testing framework for scalar regression problems using deep artificial
neural networks. Appl. Soft Comput. 2023, 146, 110729. [CrossRef]
260. Allen, B. An interpretable machine learning model of cross-sectional US county-level obesity prevalence using explainable
artificial intelligence. PLoS ONE 2023, 18, e0292341. [CrossRef]
261. Ilman, M.M.; Yavuz, S.; Taser, P.Y. Generalized Input Preshaping Vibration Control Approach for Multi-Link Flexible Manipulators
using Machine Intelligence. Mechatronics 2022, 82, 102735. [CrossRef]
262. Aghaeipoor, F.; Javidi, M.M.; Fernandez, A. IFC-BD: An Interpretable Fuzzy Classifier for Boosting Explainable Artificial
Intelligence in Big Data. IEEE Trans. Fuzzy Syst. 2022, 30, 830–840. [CrossRef]
263. Zaman, M.; Hassan, A. Fuzzy Heuristics and Decision Tree for Classification of Statistical Feature-Based Control Chart Patterns.
Symmetry 2021, 13, 110. [CrossRef]
264. Fernandez, G.; Aledo, J.A.; Gamez, J.A.; Puerta, J.M. Factual and Counterfactual Explanations in Fuzzy Classification Trees. IEEE
Trans. Fuzzy Syst. 2022, 30, 5484–5495. [CrossRef]
265. Gkalelis, N.; Daskalakis, D.; Mezaris, V. ViGAT: Bottom-Up Event Recognition and Explanation in Video Using Factorized Graph
Attention Network. IEEE Access 2022, 10, 108797–108816. [CrossRef]
266. Singha, M.; Pu, L.; Srivastava, G.; Ni, X.; Stanfield, B.A.; Uche, I.K.; Rider, P.J.F.; Kousoulas, K.G.; Ramanujam, J.; Brylinski, M.
Unlocking the Potential of Kinase Targets in Cancer: Insights from CancerOmicsNet, an AI-Driven Approach to Drug Response
Prediction in Cancer. Cancers 2023, 15, 4050. [CrossRef] [PubMed]
267. Shang, Y.; Tian, Y.; Zhou, M.; Zhou, T.; Lyu, K.; Wang, Z.; Xin, R.; Liang, T.; Zhu, S.; Li, J. EHR-Oriented Knowledge Graph
System: Toward Efficient Utilization of Non-Used Information Buried in Routine Clinical Practice. IEEE J. Biomed. Health Inform.
2021, 25, 2463–2475. [CrossRef]
268. Espinoza, J.L.; Dupont, C.L.; O’Rourke, A.; Beyhan, S.; Morales, P.; Spoering, A.; Meyer, K.J.; Chan, A.P.; Choi, Y.; Nierman,
W.C.; et al. Predicting antimicrobial mechanism-of-action from transcriptomes: A generalizable explainable artificial intelligence
approach. PLoS Comput. Biol. 2021, 17, e1008857. [CrossRef]
269. Altini, N.; Puro, E.; Taccogna, M.G.; Marino, F.; De Summa, S.; Saponaro, C.; Mattioli, E.; Zito, F.A.; Bevilacqua, V. Tumor
Cellularity Assessment of Breast Histopathological Slides via Instance Segmentation and Pathomic Features Explainability.
Bioengineering 2023, 10, 396. [CrossRef]
270. Huelsmann, J.; Barbosa, J.; Steinke, F. Local Interpretable Explanations of Energy System Designs. Energies 2023, 16, 2161.
[CrossRef]
271. Misitano, G.; Afsar, B.; Larraga, G.; Miettinen, K. Towards explainable interactive multiobjective optimization: R-XIMO. Auton.
Agents Multi-Agent Syst. 2022, 36, 43. [CrossRef]
272. Neghawi, E.; Liu, Y. Analysing Semi-Supervised ConvNet Model Performance with Computation Processes. Mach. Learn. Knowl.
Extr. 2023, 5, 1848–1876. [CrossRef]
273. Serradilla, O.; Zugasti, E.; Ramirez de Okariz, J.; Rodriguez, J.; Zurutuza, U. Adaptable and Explainable Predictive Maintenance:
Semi-Supervised Deep Learning for Anomaly Detection and Diagnosis in Press Machine Data. Appl. Sci. 2021, 11, 7376.
[CrossRef]
274. Lin, C.S.; Wang, Y.C.F. Describe, Spot and Explain: Interpretable Representation Learning for Discriminative Visual Reasoning.
IEEE Trans. Image Process. 2023, 32, 2481–2492. [CrossRef] [PubMed]
275. Mohamed, E.; Sirlantzis, K.; Howells, G.; Hoque, S. Optimisation of Deep Learning Small-Object Detectors with Novel Explainable
Verification. Sensors 2022, 22, 5596. [CrossRef]
276. Krenn, M.; Kottmann, J.S.; Tischler, N.; Aspuru-Guzik, A. Conceptual Understanding through Efficient Automated Design of
Quantum Optical Experiments. Phys. Rev. X 2021, 11, 031044. [CrossRef]
277. Podgorelec, V.; Kokol, P.; Stiglic, B.; Rozman, I. Decision trees: An overview and their use in medicine. J. Med. Syst. 2002,
26, 445–463. [CrossRef]
278. Thrun, M.C. Exploiting Distance-Based Structures in Data Using an Explainable AI for Stock Picking. Information 2022, 13, 51.
[CrossRef]
279. Carta, S.M.; Consoli, S.; Piras, L.; Podda, A.S.; Recupero, D.R. Explainable Machine Learning Exploiting News and Domain-
Specific Lexicon for Stock Market Forecasting. IEEE Access 2021, 9, 30193–30205. [CrossRef]
280. Almohimeed, A.; Saleh, H.; Mostafa, S.; Saad, R.M.A.; Talaat, A.S. Cervical Cancer Diagnosis Using Stacked Ensemble Model and
Optimized Feature Selection: An Explainable Artificial Intelligence Approach. Computers 2023, 12, 200. [CrossRef]
281. Chen, Z.; Lian, Z.; Xu, Z. Interpretable Model-Agnostic Explanations Based on Feature Relationships for High-Performance
Computing. Axioms 2023, 12, 997. [CrossRef]
282. Leite, D.; Skrjanc, I.; Blazic, S.; Zdesar, A.; Gomide, F. Interval incremental learning of interval data streams and application to
vehicle tracking. Inf. Sci. 2023, 630, 1–22. [CrossRef]
283. Antoniou, G.; Papadakis, E.; Baryannis, G. Mental Health Diagnosis: A Case for Explainable Artificial Intelligence. Int. J. Artif.
Intell. Tools 2022, 31, 2241003. [CrossRef]
284. Antoniadi, A.M.; Du, Y.; Guendouz, Y.; Wei, L.; Mazo, C.; Becker, B.A.; Mooney, C. Current challenges and future opportunities
for XAI in machine learning-based clinical decision support systems: A systematic review. Appl. Sci. 2021, 11, 5088. [CrossRef]
285. Qaffas, A.A.; Ben Hajkacem, M.A.; Ben Ncir, C.E.; Nasraoui, O. Interpretable Multi-Criteria ABC Analysis Based on Semi-
Supervised Clustering and Explainable Artificial Intelligence. IEEE Access 2023, 11, 43778–43792. [CrossRef]
286. Wickramasinghe, C.S.; Amarasinghe, K.; Marino, D.L.; Rieger, C.; Manic, M. Explainable Unsupervised Machine Learning for
Cyber-Physical Systems. IEEE Access 2021, 9, 131824–131843. [CrossRef]
287. Cui, Y.; Liu, T.; Che, W.; Chen, Z.; Wang, S. Teaching Machines to Read, Answer and Explain. IEEE/ACM Trans. Audio Speech
Lang. Process. 2022, 30, 1483–1492. [CrossRef]
288. Heuillet, A.; Couthouis, F.; Diaz-Rodriguez, N. Collective eXplainable AI: Explaining Cooperative Strategies and Agent
Contribution in Multiagent Reinforcement Learning with Shapley Values. IEEE Comput. Intell. Mag. 2022, 17, 59–71. [CrossRef]
289. Khanna, R.; Dodge, J.; Anderson, A.; Dikkala, R.; Irvine, J.; Shureih, Z.; Lam, K.H.; Matthews, C.R.; Lin, Z.; Kahng, M.; et al.
Finding AI’s Faults with AAR/AI: An Empirical Study. ACM Trans. Interact. Intell. Syst. 2022, 12, 1. [CrossRef]
290. Klar, M.; Ruediger, P.; Schuermann, M.; Goeren, G.T.; Glatt, M.; Ravani, B.; Aurich, J.C. Explainable generative design in
manufacturing for reinforcement learning based factory layout planning. J. Manuf. Syst. 2024, 72, 74–92. [CrossRef]
291. Solis-Martin, D.; Galan-Paez, J.; Borrego-Diaz, J. On the Soundness of XAI in Prognostics and Health Management (PHM).
Information 2023, 14, 256. [CrossRef]
292. Mandler, H.; Weigand, B. Feature importance in neural networks as a means of interpretation for data-driven turbulence models.
Comput. Fluids 2023, 265, 105993. [CrossRef]
293. De Bosscher, B.C.D.; Ziabari, S.S.M.; Sharpanskykh, A. A comprehensive study of agent-based airport terminal operations using
surrogate modeling and simulation. Simul. Model. Pract. Theory 2023, 128, 102811. [CrossRef]
294. Wenninger, S.; Kaymakci, C.; Wiethe, C. Explainable long-term building energy consumption prediction using QLattice. Appl.
Energy 2022, 308, 118300. [CrossRef]
295. Schrills, T.; Franke, T. How Do Users Experience Traceability of AI Systems? Examining Subjective Information Processing
Awareness in Automated Insulin Delivery (AID) Systems. ACM Trans. Interact. Intell. Syst. 2023, 13, 25. [CrossRef]
296. Mehta, H.; Passi, K. Social Media Hate Speech Detection Using Explainable Artificial Intelligence (XAI). Algorithms 2022, 15, 291.
[CrossRef]
297. Ge, W.; Wang, J.; Lin, T.; Tang, B.; Li, X. Explainable cyber threat behavior identification based on self-adversarial topic generation.
Comput. Secur. 2023, 132, 103369. [CrossRef]
298. Posada-Moreno, A.F.; Surya, N.; Trimpe, S. ECLAD: Extracting Concepts with Local Aggregated Descriptors. Pattern Recognit.
2024, 147, 110146. [CrossRef]
299. Zolanvari, M.; Yang, Z.; Khan, K.; Jain, R.; Meskin, N. TRUST XAI: Model-Agnostic Explanations for AI with a Case Study on
IIoT Security. IEEE Internet Things J. 2023, 10, 2967–2978. [CrossRef]
300. Feng, J.; Wang, D.; Gu, Z. Bidirectional Flow Decision Tree for Reliable Remote Sensing Image Scene Classification. Remote. Sens.
2022, 14, 3943. [CrossRef]
301. Yin, S.; Li, H.; Sun, Y.; Ibrar, M.; Teng, L. Data Visualization Analysis Based on Explainable Artificial Intelligence: A Survey. IJLAI
Trans. Sci. Eng. 2024, 2, 13–20.
302. Meskauskas, Z.; Kazanavicius, E. About the New Methodology and XAI-Based Software Toolkit for Risk Assessment. Sustainability
2022, 14, 5496. [CrossRef]
303. Leem, S.; Oh, J.; So, D.; Moon, J. Towards Data-Driven Decision-Making in the Korean Film Industry: An XAI Model for Box
Office Analysis Using Dimension Reduction, Clustering, and Classification. Entropy 2023, 25, 571. [CrossRef]
304. Ayoub, O.; Troia, S.; Andreoletti, D.; Bianco, A.; Tornatore, M.; Giordano, S.; Rottondi, C. Towards explainable artificial intelligence
in optical networks: The use case of lightpath QoT estimation. J. Opt. Commun. Netw. 2023, 15, A26–A38. [CrossRef]
305. Aguilar, D.L.; Medina-Perez, M.A.; Loyola-Gonzalez, O.; Choo, K.K.R.; Bucheli-Susarrey, E. Towards an Interpretable Autoen-
coder: A Decision-Tree-Based Autoencoder and its Application in Anomaly Detection. IEEE Trans. Dependable Secur. Comput.
2023, 20, 1048–1059. [CrossRef]
306. del Castillo Torres, G.; Roig-Maimo, M.F.; Mascaro-Oliver, M.; Amengual-Alcover, E.; Mas-Sanso, R. Understanding
How CNNs Recognize Facial Expressions: A Case Study with LIME and CEM. Sensors 2023, 23, 131. [CrossRef] [PubMed]
307. Dewi, C.; Chen, R.C.; Yu, H.; Jiang, X. XAI for Image Captioning using SHAP. J. Inf. Sci. Eng. 2023, 39, 711–724. [CrossRef]
308. Alkhalaf, S.; Alturise, F.; Bahaddad, A.A.; Elnaim, B.M.E.; Shabana, S.; Abdel-Khalek, S.; Mansour, R.F. Adaptive Aquila Optimizer
with Explainable Artificial Intelligence-Enabled Cancer Diagnosis on Medical Imaging. Cancers 2023, 15, 1492. [CrossRef]
309. Nascita, A.; Montieri, A.; Aceto, G.; Ciuonzo, D.; Persico, V.; Pescape, A. XAI Meets Mobile Traffic Classification: Understanding
and Improving Multimodal Deep Learning Architectures. IEEE Trans. Netw. Serv. Manag. 2021, 18, 4225–4246. [CrossRef]
310. Silva-Aravena, F.; Delafuente, H.N.; Gutierrez-Bahamondes, J.H.; Morales, J. A Hybrid Algorithm of ML and XAI to Prevent
Breast Cancer: A Strategy to Support Decision Making. Cancers 2023, 15, 2443. [CrossRef] [PubMed]
311. Bjorklund, A.; Henelius, A.; Oikarinen, E.; Kallonen, K.; Puolamaki, K. Explaining any black box model using real data. Front.
Comput. Sci. 2023, 5, 1143904. [CrossRef]
312. Dobrovolskis, A.; Kazanavicius, E.; Kizauskiene, L. Building XAI-Based Agents for IoT Systems. Appl. Sci. 2023, 13, 4040.
[CrossRef]
313. Perl, M.; Sun, Z.; Machlev, R.; Belikov, J.; Levy, K.Y.; Levron, Y. PMU placement for fault line location using neural additive
models-A global XAI technique. Int. J. Electr. Power Energy Syst. 2024, 155, 109573. [CrossRef]
314. Nwafor, O.; Okafor, E.; Aboushady, A.A.; Nwafor, C.; Zhou, C. Explainable Artificial Intelligence for Prediction of Non-Technical
Losses in Electricity Distribution Networks. IEEE Access 2023, 11, 73104–73115. [CrossRef]
315. Panagoulias, D.P.; Sarmas, E.; Marinakis, V.; Virvou, M.; Tsihrintzis, G.A.; Doukas, H. Intelligent Decision Support for Energy
Management: A Methodology for Tailored Explainability of Artificial Intelligence Analytics. Electronics 2023, 12, 4430. [CrossRef]
316. Kim, S.; Choo, S.; Park, D.; Park, H.; Nam, C.S.; Jung, J.Y.; Lee, S. Designing an XAI interface for BCI experts: A contextual design
for pragmatic explanation interface based on domain knowledge in a specific context. Int. J. Hum.-Comput. Stud. 2023, 174,
103009. [CrossRef]
317. Wang, Z.; Joe, I. OISE: Optimized Input Sampling Explanation with a Saliency Map Based on the Black-Box Model. Appl. Sci.
2023, 13, 5886. [CrossRef]
318. Puechmorel, S. Pullback Bundles and the Geometry of Learning. Entropy 2023, 25, 1450. [CrossRef]
319. Machlev, R.; Perl, M.; Belikov, J.; Levy, K.Y.; Levron, Y. Measuring Explainability and Trustworthiness of Power Quality
Disturbances Classifiers Using XAI-Explainable Artificial Intelligence. IEEE Trans. Ind. Inform. 2022, 18, 5127–5137. [CrossRef]
320. Monteiro, W.R.; Reynoso-Meza, G. A multi-objective optimization design to generate surrogate machine learning models in
explainable artificial intelligence applications. Euro J. Decis. Process. 2023, 11, 100040. [CrossRef]
321. Shi, J.; Zou, W.; Zhang, C.; Tan, L.; Zou, Y.; Peng, Y.; Huo, W. CAMFuzz: Explainable Fuzzing with Local Interpretation.
Cybersecurity 2022, 5, 17. [CrossRef]
322. Igarashi, D.; Yee, J.; Yokoyama, Y.; Kusuno, H.; Tagawa, Y. The effects of secondary cavitation position on the velocity of a
laser-induced microjet extracted using explainable artificial intelligence. Phys. Fluids 2024, 36, 013317. [CrossRef]
323. Soto, J.L.; Uriguen, E.Z.; Garcia, X.D.C. Real-Time, Model-Agnostic and User-Driven Counterfactual Explanations Using
Autoencoders. Appl. Sci. 2023, 13, 2912. [CrossRef]
324. Han, J.; Lee, Y. Explainable Artificial Intelligence-Based Competitive Factor Identification. ACM Trans. Knowl. Discov. Data 2022,
16, 10. [CrossRef]
325. Hasan, M.; Lu, M. Enhanced model tree for quantifying output variances due to random data sampling: Productivity prediction
applications. Autom. Constr. 2024, 158, 105218. [CrossRef]
326. Sajjad, U.; Hussain, I.; Hamid, K.; Ali, H.M.; Wang, C.C.; Yan, W.M. Liquid-to-vapor phase change heat transfer evaluation and
parameter sensitivity analysis of nanoporous surface coatings. Int. J. Heat Mass Transf. 2022, 194, 123088. [CrossRef]
327. Ravi, S.K.; Roy, I.; Roychowdhury, S.; Feng, B.; Ghosh, S.; Reynolds, C.; Umretiya, R.V.; Rebak, R.B.; Hoffman, A.K. Elucidating
precipitation in FeCrAl alloys through explainable AI: A case study. Comput. Mater. Sci. 2023, 230, 112440. [CrossRef]
328. Sauter, D.; Lodde, G.; Nensa, F.; Schadendorf, D.; Livingstone, E.; Kukuk, M. Validating Automatic Concept-Based Explanations
for AI-Based Digital Histopathology. Sensors 2022, 22, 5346. [CrossRef]
329. Akilandeswari, P.; Eliazer, M.; Patil, R. Explainable AI-Reducing Costs, Finding the Optimal Path between Graphical Locations.
Int. J. Early Child. Spec. Educ. 2022, 14, 504–511. [CrossRef]
330. Aghaeipoor, F.; Sabokrou, M.; Fernandez, A. Fuzzy Rule-Based Explainer Systems for Deep Neural Networks: From Local
Explainability to Global Understanding. IEEE Trans. Fuzzy Syst. 2023, 31, 3069–3080. [CrossRef]
331. Lee, E.H.; Kim, H. Feature-Based Interpretation of the Deep Neural Network. Electronics 2021, 10, 2687. [CrossRef]
332. Hung, S.C.; Wu, H.C.; Tseng, M.H. Integrating Image Quality Enhancement Methods and Deep Learning Techniques for Remote
Sensing Scene Classification. Appl. Sci. 2021, 11, 1659. [CrossRef]
333. Heistrene, L.; Machlev, R.; Perl, M.; Belikov, J.; Baimel, D.; Levy, K.; Mannor, S.; Levron, Y. Explainability-based Trust Algorithm
for electricity price forecasting models. Energy AI 2023, 14, 100259. [CrossRef]
334. Ribeiro, D.; Matos, L.M.; Moreira, G.; Pilastri, A.; Cortez, P. Isolation Forests and Deep Autoencoders for Industrial Screw
Tightening Anomaly Detection. Computers 2022, 11, 54. [CrossRef]
335. Blomerus, N.; Cilliers, J.; Nel, W.; Blasch, E.; de Villiers, P. Feedback-Assisted Automatic Target and Clutter Discrimination
Using a Bayesian Convolutional Neural Network for Improved Explainability in SAR Applications. Remote. Sens. 2022, 14, 96.
[CrossRef]
336. Estivill-Castro, V.; Gilmore, E.; Hexel, R. Constructing Explainable Classifiers from the Start-Enabling Human-in-the-Loop
Machine Learning. Information 2022, 13, 464. [CrossRef]
337. Angelotti, G.; Diaz-Rodriguez, N. Towards a more efficient computation of individual attribute and policy contribution for
post-hoc explanation of cooperative multi-agent systems using Myerson values. Knowl.-Based Syst. 2023, 260, 110189. [CrossRef]
338. Tang, R.; Liu, N.; Yang, F.; Zou, N.; Hu, X. Defense Against Explanation Manipulation. Front. Big Data 2022, 5, 704203. [CrossRef]
[PubMed]
339. Al-Sakkari, E.G.; Ragab, A.; So, T.M.Y.; Shokrollahi, M.; Dagdougui, H.; Navarri, P.; Elkamel, A.; Amazouz, M. Machine
learning-assisted selection of adsorption-based carbon dioxide capture materials. J. Environ. Chem. Eng. 2023, 11, 110732.
[CrossRef]
340. Apostolopoulos, I.D.; Apostolopoulos, D.J.; Papathanasiou, N.D. Deep Learning Methods to Reveal Important X-ray Features in
COVID-19 Detection: Investigation of Explainability and Feature Reproducibility. Reports 2022, 5, 20. [CrossRef]
341. Deramgozin, M.M.; Jovanovic, S.; Arevalillo-Herraez, M.; Ramzan, N.; Rabah, H. Attention-Enabled Lightweight Neural Network
Architecture for Detection of Action Unit Activation. IEEE Access 2023, 11, 117954–117970. [CrossRef]
342. Dassanayake, P.M.; Anjum, A.; Bashir, A.K.; Bacon, J.; Saleem, R.; Manning, W. A Deep Learning Based Explainable Control
System for Reconfigurable Networks of Edge Devices. IEEE Trans. Netw. Sci. Eng. 2022, 9, 7–19. [CrossRef]
343. Qayyum, F.; Khan, M.A.; Kim, D.H.; Ko, H.; Ryu, G.A. Explainable AI for Material Property Prediction Based on Energy Cloud:
A Shapley-Driven Approach. Materials 2023, 16, 7322. [CrossRef]
344. Lellep, M.; Prexl, J.; Eckhardt, B.; Linkmann, M. Interpreted machine learning in fluid dynamics: Explaining relaminarisation
events in wall-bounded shear flows. J. Fluid Mech. 2022, 942, A2. [CrossRef]
345. Bilc, S.; Groza, A.; Muntean, G.; Nicoara, S.D. Interleaving Automatic Segmentation and Expert Opinion for Retinal Conditions.
Diagnostics 2022, 12, 22. [CrossRef] [PubMed]
346. Sakai, A.; Komatsu, M.; Komatsu, R.; Matsuoka, R.; Yasutomi, S.; Dozen, A.; Shozu, K.; Arakaki, T.; Machino, H.; Asada, K.; et al.
Medical Professional Enhancement Using Explainable Artificial Intelligence in Fetal Cardiac Ultrasound Screening. Biomedicines
2022, 10, 551. [CrossRef] [PubMed]
347. Terzi, D.S.; Demirezen, U.; Sagiroglu, S. Explainable Credit Card Fraud Detection with Image Conversion. ADCAIJ Adv. Distrib.
Comput. Artif. Intell. J. 2021, 10, 63–76. [CrossRef]
348. Kothadiya, D.R.; Bhatt, C.M.; Rehman, A.; Alamri, F.S.; Saba, T. SignExplainer: An Explainable AI-Enabled Framework for Sign
Language Recognition with Ensemble Learning. IEEE Access 2023, 11, 47410–47419. [CrossRef]
349. Slijepcevic, D.; Zeppelzauer, M.; Unglaube, F.; Kranzl, A.; Breiteneder, C.; Horsak, B. Explainable Machine Learning in Human
Gait Analysis: A Study on Children with Cerebral Palsy. IEEE Access 2023, 11, 65906–65923. [CrossRef]
350. Hwang, C.; Lee, T. E-SFD: Explainable Sensor Fault Detection in the ICS Anomaly Detection System. IEEE Access 2021,
9, 140470–140486. [CrossRef]
351. Rivera, A.J.; Munoz, J.C.; Perez-Goody, M.D.; de San Pedro, B.S.; Charte, F.; Elizondo, D.; Rodriguez, C.; Abolafia, M.L.; Perea, A.;
del Jesus, M.J. XAIRE: An ensemble-based methodology for determining the relative importance of variables in regression tasks.
Application to a hospital emergency department. Artif. Intell. Med. 2023, 137, 102494. [CrossRef]
352. Park, J.J.; Lee, S.; Shin, S.; Kim, M.; Park, J. Development of a Light and Accurate NOx Prediction Model for Diesel Engines Using
Machine Learning and XAI Methods. Int. J. Automot. Technol. 2023, 24, 559–571. [CrossRef]
353. Abdollahi, A.; Pradhan, B. Urban Vegetation Mapping from Aerial Imagery Using Explainable AI (XAI). Sensors 2021, 21, 4738.
[CrossRef]
354. Xie, Y.; Pongsakornsathien, N.; Gardi, A.; Sabatini, R. Explanation of Machine-Learning Solutions in Air-Traffic Management.
Aerospace 2021, 8, 224. [CrossRef]
355. Al-Hawawreh, M.; Moustafa, N. Explainable deep learning for attack intelligence and combating cyber-physical attacks. Ad Hoc
Netw. 2024, 153, 103329. [CrossRef]
356. Srisuchinnawong, A.; Homchanthanakul, J.; Manoonpong, P. NeuroVis: Real-Time Neural Information Measurement and
Visualization of Embodied Neural Systems. Front. Neural Circuits 2021, 15, 743101. [CrossRef] [PubMed]
357. Dai, B.; Shen, X.; Chen, L.Y.; Li, C.; Pan, W. Data-Adaptive Discriminative Feature Localization with Statistically Guaranteed
Interpretation. Ann. Appl. Stat. 2023, 17, 2019–2038. [CrossRef]
358. Li, Z. Extracting spatial effects from machine learning model using local interpretation method: An example of SHAP and
XGBoost. Comput. Environ. Urban Syst. 2022, 96, 101845. [CrossRef]
359. Gonzalez-Gonzalez, J.; Garcia-Mendez, S.; De Arriba-Perez, F.; Gonzalez-Castano, F.J.; Barba-Seara, O. Explainable Automatic
Industrial Carbon Footprint Estimation from Bank Transaction Classification Using Natural Language Processing. IEEE Access
2022, 10, 126326–126338. [CrossRef]
360. Elayan, H.; Aloqaily, M.; Karray, F.; Guizani, M. Internet of Behavior and Explainable AI Systems for Influencing IoT Behavior.
IEEE Netw. 2023, 37, 62–68. [CrossRef]
361. Cheng, X.; Doosthosseini, A.; Kunkel, J. Improve the Deep Learning Models in Forestry Based on Explanations and Expertise.
Front. Plant Sci. 2022, 13, 902105. [CrossRef] [PubMed]
362. Qiu, W.; Chen, H.; Kaeberlein, M.; Lee, S.I. ExplaiNAble BioLogical Age (ENABL Age): An artificial intelligence framework for
interpretable biological age. Lancet Healthy Longev. 2023, 4, e711–e723. [CrossRef]
363. Abba, S.I.; Yassin, M.A.; Mubarak, A.S.; Shah, S.M.H.; Usman, J.; Oudah, A.Y.; Naganna, S.R.; Aljundi, I.H. Drinking Water
Resources Suitability Assessment Based on Pollution Index of Groundwater Using Improved Explainable Artificial Intelligence.
Sustainability 2023, 15, 5655. [CrossRef]
364. Martinez-Seras, A.; Del Ser, J.; Lobo, J.L.; Garcia-Bringas, P.; Kasabov, N. A novel Out-of-Distribution detection approach for
Spiking Neural Networks: Design, fusion, performance evaluation and explainability. Inf. Fusion 2023, 100, 101943. [CrossRef]
365. Krupp, L.; Wiede, C.; Friedhoff, J.; Grabmaier, A. Explainable Remaining Tool Life Prediction for Individualized Production
Using Automated Machine Learning. Sensors 2023, 23, 8523. [CrossRef] [PubMed]
366. Nayebi, A.; Tipirneni, S.; Reddy, C.K.; Foreman, B.; Subbian, V. WindowSHAP: An efficient framework for explaining time-series
classifiers based on Shapley values. J. Biomed. Inform. 2023, 144, 104438. [CrossRef] [PubMed]
367. Lee, J.; Jeong, J.; Jung, S.; Moon, J.; Rho, S. Verification of De-Identification Techniques for Personal Information Using Tree-Based
Methods with Shapley Values. J. Pers. Med. 2022, 12, 190. [CrossRef]
368. Nahiduzzaman, M.; Chowdhury, M.E.H.; Salam, A.; Nahid, E.; Ahmed, F.; Al-Emadi, N.; Ayari, M.A.; Khandakar, A.; Haider, J.
Explainable deep learning model for automatic mulberry leaf disease classification. Front. Plant Sci. 2023, 14, 1175515. [CrossRef]
[PubMed]
369. Khan, A.; Ul Haq, I.; Hussain, T.; Muhammad, K.; Hijji, M.; Sajjad, M.; De Albuquerque, V.H.C.; Baik, S.W. PMAL: A Proxy Model
Active Learning Approach for Vision Based Industrial Applications. ACM Trans. Multimed. Comput. Commun. Appl. 2022, 18, 123.
[CrossRef]
370. Beucher, A.; Rasmussen, C.B.; Moeslund, T.B.; Greve, M.H. Interpretation of Convolutional Neural Networks for Acid Sulfate
Soil Classification. Front. Environ. Sci. 2022, 9, 809995. [CrossRef]
371. Kui, B.; Pinter, J.; Molontay, R.; Nagy, M.; Farkas, N.; Gede, N.; Vincze, A.; Bajor, J.; Godi, S.; Czimmer, J.; et al. EASY-APP: An
artificial intelligence model and application for early and easy prediction of severity in acute pancreatitis. Clin. Transl. Med. 2022,
12, e842. [CrossRef]
372. Szandala, T. Unlocking the black box of CNNs: Visualising the decision-making process with PRISM. Inf. Sci. 2023, 642, 119162.
[CrossRef]
373. Rengasamy, D.; Rothwell, B.C.; Figueredo, G.P. Towards a More Reliable Interpretation of Machine Learning Outputs for
Safety-Critical Systems Using Feature Importance Fusion. Appl. Sci. 2021, 11, 1854. [CrossRef]
374. Jahin, M.A.; Shovon, M.S.H.; Islam, M.S.; Shin, J.; Mridha, M.F.; Okuyama, Y. QAmplifyNet: Pushing the boundaries of supply
chain backorder prediction using interpretable hybrid quantum-classical neural network. Sci. Rep. 2023, 13, 18246. [CrossRef]
[PubMed]
375. Nielsen, I.E.; Ramachandran, R.P.; Bouaynaya, N.; Fathallah-Shaykh, H.M.; Rasool, G. EvalAttAI: A Holistic Approach to
Evaluating Attribution Maps in Robust and Non-Robust Models. IEEE Access 2023, 11, 82556–82569. [CrossRef]
376. Hashem, H.A.; Abdulazeem, Y.; Labib, L.M.; Elhosseini, M.A.; Shehata, M. An Integrated Machine Learning-Based Brain
Computer Interface to Classify Diverse Limb Motor Tasks: Explainable Model. Sensors 2023, 23, 3171. [CrossRef] [PubMed]
377. Lin, R.; Wichadakul, D. Interpretable Deep Learning Model Reveals Subsequences of Various Functions for Long Non-Coding
RNA Identification. Front. Genet. 2022, 13, 876721. [CrossRef]
378. Chen, H.; Yang, L.; Wu, Q. Enhancing Land Cover Mapping and Monitoring: An Interactive and Explainable Machine Learning
Approach Using Google Earth Engine. Remote Sens. 2023, 15, 4585. [CrossRef]
379. Oveis, A.H.; Giusti, E.; Ghio, S.; Meucci, G.; Martorella, M. LIME-Assisted Automatic Target Recognition with SAR Images:
Toward Incremental Learning and Explainability. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 9175–9192. [CrossRef]
380. Llorca-Schenk, J.; Rico-Juan, J.R.; Sanchez-Lozano, M. Designing porthole aluminium extrusion dies on the basis of eXplainable
Artificial Intelligence. Expert Syst. Appl. 2023, 222, 119808. [CrossRef]
381. Diaz, G.M.; Hernandez, J.J.G.; Salvador, J.L.G. Analyzing Employee Attrition Using Explainable AI for Strategic HR Decision-
Making. Mathematics 2023, 11, 4677. [CrossRef]
382. Pelaez-Rodriguez, C.; Marina, C.M.; Perez-Aracil, J.; Casanova-Mateo, C.; Salcedo-Sanz, S. Extreme Low-Visibility Events
Prediction Based on Inductive and Evolutionary Decision Rules: An Explicability-Based Approach. Atmosphere 2023, 14, 542.
[CrossRef]
383. An, J.; Zhang, Y.; Joe, I. Specific-Input LIME Explanations for Tabular Data Based on Deep Learning Models. Appl. Sci. 2023, 13,
8782. [CrossRef]
384. Glick, A.; Clayton, M.; Angelov, N.; Chang, J. Impact of explainable artificial intelligence assistance on clinical decision-making of
novice dental clinicians. JAMIA Open 2022, 5, ooac031. [CrossRef] [PubMed]
385. Qureshi, Y.M.; Voloshin, V.; Facchinelli, L.; McCall, P.J.; Chervova, O.; Towers, C.E.; Covington, J.A.; Towers, D.P. Finding a
Husband: Using Explainable AI to Define Male Mosquito Flight Differences. Biology 2023, 12, 496. [CrossRef] [PubMed]
386. Wen, B.; Wang, N.; Subbalakshmi, K.; Chandramouli, R. Revealing the Roles of Part-of-Speech Taggers in Alzheimer Disease
Detection: Scientific Discovery Using One-Intervention Causal Explanation. JMIR Form. Res. 2023, 7, e36590. [CrossRef]
[PubMed]
387. Alvey, B.; Anderson, D.; Keller, J.; Buck, A. Linguistic Explanations of Black Box Deep Learning Detectors on Simulated Aerial
Drone Imagery. Sensors 2023, 23, 6879. [CrossRef] [PubMed]
388. Hou, B.; Gao, J.; Guo, X.; Baker, T.; Zhang, Y.; Wen, Y.; Liu, Z. Mitigating the Backdoor Attack by Federated Filters for Industrial
IoT Applications. IEEE Trans. Ind. Inform. 2022, 18, 3562–3571. [CrossRef]
389. Nakagawa, P.I.; Pires, L.F.; Moreira, J.L.R.; Santos, L.O.B.d.S.; Bukhsh, F. Semantic Description of Explainable Machine Learning
Workflows for Improving Trust. Appl. Sci. 2021, 11, 804. [CrossRef]
390. Yang, M.; Moon, J.; Yang, S.; Oh, H.; Lee, S.; Kim, Y.; Jeong, J. Design and Implementation of an Explainable Bidirectional LSTM
Model Based on Transition System Approach for Cooperative AI-Workers. Appl. Sci. 2022, 12, 6390. [CrossRef]
391. O’Shea, R.; Manickavasagar, T.; Horst, C.; Hughes, D.; Cusack, J.; Tsoka, S.; Cook, G.; Goh, V. Weakly supervised segmentation
models as explainable radiological classifiers for lung tumour detection on CT images. Insights Imaging 2023, 14, 195. [CrossRef]
[PubMed]
392. Tasnim, N.; Al Mamun, S.; Shahidul Islam, M.; Kaiser, M.S.; Mahmud, M. Explainable Mortality Prediction Model for Congestive
Heart Failure with Nature-Based Feature Selection Method. Appl. Sci. 2023, 13, 6138. [CrossRef]
393. Marques-Silva, J.; Ignatiev, A. No silver bullet: Interpretable ML models must be explained. Front. Artif. Intell. 2023, 6, 1128212.
[CrossRef] [PubMed]
394. Pedraza, A.; del Rio, D.; Bautista-Juzgado, V.; Fernandez-Lopez, A.; Sanz-Andres, A. Study of the Feasibility of Decoupling
Temperature and Strain from a ϕ-PA-OFDR over an SMF Using Neural Networks. Sensors 2023, 23, 5515. [CrossRef] [PubMed]
395. Kwon, S.; Lee, Y. Explainability-Based Mix-Up Approach for Text Data Augmentation. ACM Trans. Knowl. Discov. Data 2023,
17, 13. [CrossRef]
396. Rosenberg, G.; Brubaker, J.K.; Schuetz, M.J.A.; Salton, G.; Zhu, Z.; Zhu, E.Y.; Kadioglu, S.; Borujeni, S.E.; Katzgraber, H.G.
Explainable Artificial Intelligence Using Expressive Boolean Formulas. Mach. Learn. Knowl. Extr. 2023, 5, 1760–1795. [CrossRef]
397. O’Sullivan, C.M.; Deo, R.C.; Ghahramani, A. Explainable AI approach with original vegetation data classifies spatio-temporal
nitrogen in flows from ungauged catchments to the Great Barrier Reef. Sci. Rep. 2023, 13, 18145. [CrossRef]
398. Richter, Y.; Balal, N.; Pinhasi, Y. Neural-Network-Based Target Classification and Range Detection by CW MMW Radar.
Remote Sens. 2023, 15, 4553. [CrossRef]
399. Dong, G.; Ma, Y.; Basu, A. Feature-Guided CNN for Denoising Images from Portable Ultrasound Devices. IEEE Access 2021,
9, 28272–28281. [CrossRef]
400. Murala, D.K.; Panda, S.K.; Dash, S.P. MedMetaverse: Medical Care of Chronic Disease Patients and Managing Data Using
Artificial Intelligence, Blockchain, and Wearable Devices State-of-the-Art Methodology. IEEE Access 2023, 11, 138954–138985.
[CrossRef]
401. Brakefield, W.S.; Ammar, N.; Shaban-Nejad, A. An Urban Population Health Observatory for Disease Causal Pathway Analysis
and Decision Support: Underlying Explainable Artificial Intelligence Model. JMIR Form. Res. 2022, 6, e36055. [CrossRef]
402. Ortega, A.; Fierrez, J.; Morales, A.; Wang, Z.; de la Cruz, M.; Alonso, C.L.; Ribeiro, T. Symbolic AI for XAI: Evaluating LFIT
Inductive Programming for Explaining Biases in Machine Learning. Computers 2021, 10, 154. [CrossRef]
403. An, J.; Joe, I. Attention Map-Guided Visual Explanations for Deep Neural Networks. Appl. Sci. 2022, 12, 3846. [CrossRef]
404. Huang, X.; Sun, Y.; Feng, S.; Ye, Y.; Li, X. Better Visual Interpretation for Remote Sensing Scene Classification. IEEE Geosci.
Remote Sens. Lett. 2022, 19, 6504305. [CrossRef]
405. Senocak, A.U.G.; Yilmaz, M.T.; Kalkan, S.; Yucel, I.; Amjad, M. An explainable two-stage machine learning approach for
precipitation forecast. J. Hydrol. 2023, 627, 130375. [CrossRef]
406. Kalutharage, C.S.; Liu, X.; Chrysoulas, C.; Pitropakis, N.; Papadopoulos, P. Explainable AI-Based DDOS Attack Identification
Method for IoT Networks. Computers 2023, 12, 32. [CrossRef]
407. Sorayaie Azar, A.; Naemi, A.; Babaei Rikan, S.; Mohasefi, J.B.; Pirnejad, H.; Wiil, U.K. Monkeypox detection using deep neural
networks. BMC Infect. Dis. 2023, 23, 438. [CrossRef] [PubMed]
408. Di Stefano, V.; Prinzi, F.; Luigetti, M.; Russo, M.; Tozza, S.; Alonge, P.; Romano, A.; Sciarrone, M.A.; Vitali, F.; Mazzeo, A.; et al.
Machine Learning for Early Diagnosis of ATTRv Amyloidosis in Non-Endemic Areas: A Multicenter Study from Italy. Brain Sci.
2023, 13, 805. [CrossRef] [PubMed]
409. Huong, T.T.; Bac, T.P.; Ha, K.N.; Hoang, N.V.; Hoang, N.X.; Hung, N.T.; Tran, K.P. Federated Learning-Based Explainable
Anomaly Detection for Industrial Control Systems. IEEE Access 2022, 10, 53854–53872. [CrossRef]
410. Diefenbach, S.; Christoforakos, L.; Ullrich, D.; Butz, A. Invisible but Understandable: In Search of the Sweet Spot between
Technology Invisibility and Transparency in Smart Spaces and Beyond. Multimodal Technol. Interact. 2022, 6, 95. [CrossRef]
411. Patel, J.; Amipara, C.; Ahanger, T.A.; Ladhva, K.; Gupta, R.K.; Alsaab, H.O.O.; Althobaiti, Y.S.S.; Ratna, R. A Machine Learning-
Based Water Potability Prediction Model by Using Synthetic Minority Oversampling Technique and Explainable AI. Comput.
Intell. Neurosci. 2022, 2022, 9283293. [CrossRef]
412. Kim, J.K.; Lee, K.; Hong, S.G. Cognitive Load Recognition Based on T-Test and SHAP from Wristband Sensors. Hum.-Centric
Comput. Inf. Sci. 2023, 13. [CrossRef]
413. Schroeder, M.; Zamanian, A.; Ahmidi, N. What about the Latent Space? The Need for Latent Feature Saliency Detection in Deep
Time Series Classification. Mach. Learn. Knowl. Extr. 2023, 5, 539–559. [CrossRef]
414. Singh, A.; Pannu, H.; Malhi, A. Explainable Information Retrieval using Deep Learning for Medical images. Comput. Sci. Inf.
Syst. 2022, 19, 277–307. [CrossRef]
415. Kumara, I.; Ariz, M.H.; Chhetri, M.B.; Mohammadi, M.; Van Den Heuvel, W.J.; Tamburri, D.A. FOCloud: Feature Model Guided
Performance Prediction and Explanation for Deployment Configurable Cloud Applications. IEEE Trans. Serv. Comput. 2023,
16, 302–314. [CrossRef]
416. Konforti, Y.; Shpigler, A.; Lerner, B.; Bar-Hillel, A. SIGN: Statistical Inference Graphs Based on Probabilistic Network Activity
Interpretation. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 3783–3797. [CrossRef] [PubMed]
417. Oblak, T.; Haraksim, R.; Beslay, L.; Peer, P. Probabilistic Fingermark Quality Assessment with Quality Region Localisation.
Sensors 2023, 23, 4006. [CrossRef]
418. Le, T.T.H.; Kang, H.; Kim, H. Robust Adversarial Attack Against Explainable Deep Classification Models Based on Adversarial
Images with Different Patch Sizes and Perturbation Ratios. IEEE Access 2021, 9, 133049–133061. [CrossRef]
419. Capuozzo, S.; Gravina, M.; Gatta, G.; Marrone, S.; Sansone, C. A Multimodal Knowledge-Based Deep Learning Approach for
MGMT Promoter Methylation Identification. J. Imaging 2022, 8, 321. [CrossRef] [PubMed]
420. Vo, H.T.; Thien, N.N.; Mui, K.C. A Deep Transfer Learning Approach for Accurate Dragon Fruit Ripeness Classification and
Visual Explanation using Grad-CAM. Int. J. Adv. Comput. Sci. Appl. 2023, 14, 1344–1352. [CrossRef]
421. Artelt, A.; Hammer, B. Efficient computation of counterfactual explanations and counterfactual metrics of prototype-based
classifiers. Neurocomputing 2022, 470, 304–317. [CrossRef]
422. Abeyrathna, K.D.; Granmo, O.C.; Goodwin, M. Adaptive Sparse Representation of Continuous Input for Tsetlin Machines Based
on Stochastic Searching on the Line. Electronics 2021, 10, 2107. [CrossRef]
423. Pandiyan, V.; Wrobel, R.; Leinenbach, C.; Shevchik, S. Optimizing in-situ monitoring for laser powder bed fusion process:
Deciphering acoustic emission and sensor sensitivity with explainable machine learning. J. Mater. Process. Technol. 2023,
321, 118144. [CrossRef]
424. Jeon, M.; Kim, T.; Kim, S.; Lee, C.; Youn, C.H. Recursive Visual Explanations Mediation Scheme Based on DropAttention Model
with Multiple Episodes Pool. IEEE Access 2023, 11, 4306–4321. [CrossRef]
425. Jia, B.; Qiao, W.; Zong, Z.; Liu, S.; Hijji, M.; Del Ser, J.; Muhammad, K. A fingerprint-based localization algorithm based on
LSTM and data expansion method for sparse samples. Future Gener. Comput. Syst. Int. J. eScience 2022, 137, 380–393. [CrossRef]
426. Munkhdalai, L.; Munkhdalai, T.; Pham, V.H.; Hong, J.E.; Ryu, K.H.; Theera-Umpon, N. Neural Network-Augmented Locally
Adaptive Linear Regression Model for Tabular Data. Sustainability 2022, 14, 5273. [CrossRef]
427. Gouabou, A.C.F.; Collenne, J.; Monnier, J.; Iguernaissi, R.; Damoiseaux, J.L.; Moudafi, A.; Merad, D. Computer Aided Diagnosis
of Melanoma Using Deep Neural Networks and Game Theory: Application on Dermoscopic Images of Skin Lesions. Int. J. Mol.
Sci. 2022, 23, 3838. [CrossRef] [PubMed]
428. Abeyrathna, K.D.; Granmo, O.C.; Goodwin, M. Extending the Tsetlin Machine with Integer-Weighted Clauses for Increased
Interpretability. IEEE Access 2021, 9, 8233–8248. [CrossRef]
429. Nagaoka, T.; Kozuka, T.; Yamada, T.; Habe, H.; Nemoto, M.; Tada, M.; Abe, K.; Handa, H.; Yoshida, H.; Ishii, K.; et al. A Deep
Learning System to Diagnose COVID-19 Pneumonia Using Masked Lung CT Images to Avoid AI-generated COVID-19 Diagnoses
that Include Data outside the Lungs. Adv. Biomed. Eng. 2022, 11, 76–86. [CrossRef]
430. Ali, S.; Hussain, A.; Bhattacharjee, S.; Athar, A.; Abdullah, A.; Kim, H.C. Detection of COVID-19 in X-ray Images Using Densely
Connected Squeeze Convolutional Neural Network (DCSCNN): Focusing on Interpretability and Explainability of the Black Box
Model. Sensors 2022, 22, 9983. [CrossRef]
431. Elbagoury, B.M.; Vladareanu, L.; Vladareanu, V.; Salem, A.B.; Travediu, A.M.; Roushdy, M.I. A Hybrid Stacked CNN and
Residual Feedback GMDH-LSTM Deep Learning Model for Stroke Prediction Applied on Mobile AI Smart Hospital Platform.
Sensors 2023, 23, 3500. [CrossRef] [PubMed]
432. Yuan, L.; Andrews, J.; Mu, H.; Vakil, A.; Ewing, R.; Blasch, E.; Li, J. Interpretable Passive Multi-Modal Sensor Fusion for Human
Identification and Activity Recognition. Sensors 2022, 22, 5787. [CrossRef]
433. Someetheram, V.; Marsani, M.F.; Mohd Kasihmuddin, M.S.; Zamri, N.E.; Muhammad Sidik, S.S.; Mohd Jamaludin, S.Z.; Mansor,
M.A. Random Maximum 2 Satisfiability Logic in Discrete Hopfield Neural Network Incorporating Improved Election Algorithm.
Mathematics 2022, 10, 4734. [CrossRef]
434. Sudars, K.; Namatevs, I.; Ozols, K. Improving Performance of the PRYSTINE Traffic Sign Classification by Using a Perturbation-
Based Explainability Approach. J. Imaging 2022, 8, 30. [CrossRef] [PubMed]
435. Aslam, N.; Khan, I.U.; Bader, S.A.; Alansari, A.; Alaqeel, L.A.; Khormy, R.M.; Alkubaish, Z.A.; Hussain, T. Explainable
Classification Model for Android Malware Analysis Using API and Permission-Based Features. CMC-Comput. Mater. Contin.
2023, 76, 3167–3188. [CrossRef]
436. Shin, C.Y.; Park, J.T.; Baek, U.J.; Kim, M.S. A Feasible and Explainable Network Traffic Classifier Utilizing DistilBERT. IEEE Access
2023, 11, 70216–70237. [CrossRef]
437. Samir, M.; Sherief, N.; Abdelmoez, W. Improving Bug Assignment and Developer Allocation in Software Engineering through
Interpretable Machine Learning Models. Computers 2023, 12, 128. [CrossRef]
438. Guidotti, R.; D’Onofrio, M. Matrix Profile-Based Interpretable Time Series Classifier. Front. Artif. Intell. 2021, 4, 699448. [CrossRef]
439. Ekanayake, I.U.; Palitha, S.; Gamage, S.; Meddage, D.P.P.; Wijesooriya, K.; Mohotti, D. Predicting adhesion strength of
micropatterned surfaces using gradient boosting models and explainable artificial intelligence visualizations. Mater. Today
Commun. 2023, 36, 106545. [CrossRef]
440. Kobayashi, K.; Alam, S.B. Explainable, interpretable, and trustworthy AI for an intelligent digital twin: A case study on remaining
useful life. Eng. Appl. Artif. Intell. 2024, 129, 107620. [CrossRef]
441. Bitar, A.; Rosales, R.; Paulitsch, M. Gradient-based feature-attribution explainability methods for spiking neural networks. Front.
Neurosci. 2023, 17, 1153999. [CrossRef] [PubMed]
442. Kim, H.; Kim, J.S.; Chung, C.K. Identification of cerebral cortices processing acceleration, velocity, and position during directional
reaching movement with deep neural network and explainable AI. Neuroimage 2023, 266, 119783. [CrossRef]
443. Khondker, A.; Kwong, J.C.C.; Rickard, M.; Skreta, M.; Keefe, D.T.; Lorenzo, A.J.; Erdman, L. A machine learning-based approach
for quantitative grading of vesicoureteral reflux from voiding cystourethrograms: Methods and proof of concept. J. Pediatr. Urol.
2022, 18, 78.e1–78.e7. [CrossRef]
444. Lucieri, A.; Dengel, A.; Ahmed, S. Translating theory into practice: Assessing the privacy implications of concept-based
explanations for biomedical AI. Front. Bioinform. 2023, 3, 1194993. [CrossRef] [PubMed]
445. Suhail, S.; Iqbal, M.; Hussain, R.; Jurdak, R. ENIGMA: An explainable digital twin security solution for cyber-physical systems.
Comput. Ind. 2023, 151, 103961. [CrossRef]
446. Bacco, L.; Cimino, A.; Dell’Orletta, F.; Merone, M. Explainable Sentiment Analysis: A Hierarchical Transformer-Based Extractive
Summarization Approach. Electronics 2021, 10, 2195. [CrossRef]
447. Prakash, A.J.; Patro, K.K.; Saunak, S.; Sasmal, P.; Kumari, P.L.; Geetamma, T. A New Approach of Transparent and Explainable
Artificial Intelligence Technique for Patient-Specific ECG Beat Classification. IEEE Sens. Lett. 2023, 7, 5501604. [CrossRef]
448. Alani, M.M.; Awad, A.I. PAIRED: An Explainable Lightweight Android Malware Detection System. IEEE Access 2022, 10,
73214–73228. [CrossRef]
449. Maloca, P.M.; Mueller, P.L.; Lee, A.Y.; Tufail, A.; Balaskas, K.; Niklaus, S.; Kaiser, P.; Suter, S.; Zarranz-Ventura, J.; Egan, C.;
et al. Unraveling the deep learning gearbox in optical coherence tomography image segmentation towards explainable artificial
intelligence. Commun. Biol. 2021, 4, 170. [CrossRef] [PubMed]
450. Ahn, I.; Gwon, H.; Kang, H.; Kim, Y.; Seo, H.; Choi, H.; Cho, H.N.; Kim, M.; Jun, T.J.; Kim, Y.H. Machine Learning-Based Hospital
Discharge Prediction for Patients with Cardiovascular Diseases: Development and Usability Study. JMIR Med. Inform. 2021,
9, e32662. [CrossRef]
451. Hammer, J.; Schirrmeister, R.T.; Hartmann, K.; Marusic, P.; Schulze-Bonhage, A.; Ball, T. Interpretable functional specialization
emerges in deep convolutional networks trained on brain signals. J. Neural Eng. 2022, 19, 036006. [CrossRef]
452. Ikushima, H.; Usui, K. Identification of age-dependent features of human bronchi using explainable artificial intelligence. ERJ
Open Res. 2023, 9. [CrossRef]
453. Kalir, A.A.; Lo, S.K.; Goldberg, G.; Zingerman-Koladko, I.; Ohana, A.; Revah, Y.; Chimol, T.B.; Honig, G. Leveraging Machine
Learning for Capacity and Cost on a Complex Toolset: A Case Study. IEEE Trans. Semicond. Manuf. 2023, 36, 611–618. [CrossRef]
454. Shin, H.; Noh, G.; Choi, B.M. Photoplethysmogram based vascular aging assessment using the deep convolutional neural
network. Sci. Rep. 2022, 12, 11377. [CrossRef] [PubMed]
455. Chandra, H.; Pawar, P.M.; Elakkiya, R.; Tamizharasan, P.S.; Muthalagu, R.; Panthakkan, A. Explainable AI for Soil Fertility
Prediction. IEEE Access 2023, 11, 97866–97878. [CrossRef]
456. Blix, K.; Ruescas, A.B.; Johnson, J.E.; Camps-Valls, G. Learning Relevant Features of Optical Water Types. IEEE Geosci. Remote
Sens. Lett. 2022, 19, 1502105. [CrossRef]
457. Topp, S.N.; Barclay, J.; Diaz, J.; Sun, A.Y.; Jia, X.; Lu, D.; Sadler, J.M.; Appling, A.P. Stream Temperature Prediction in a Shifting
Environment: Explaining the Influence of Deep Learning Architecture. Water Resour. Res. 2023, 59, e2022WR033880. [CrossRef]
458. Till, T.; Tschauner, S.; Singer, G.; Lichtenegger, K.; Till, H. Development and optimization of AI algorithms for wrist fracture
detection in children using a freely available dataset. Front. Pediatr. 2023, 11, 1291804. [CrossRef] [PubMed]
459. Aswad, F.M.; Kareem, A.N.; Khudhur, A.M.; Khalaf, B.A.; Mostafa, S.A. Tree-based machine learning algorithms in the Internet
of Things environment for multivariate flood status prediction. J. Intell. Syst. 2022, 31, 1–14. [CrossRef]
460. Ghosh, I.; Alfaro-Cortes, E.; Gamez, M.; Garcia-Rubio, N. Modeling hydro, nuclear, and renewable electricity generation in India:
An atom search optimization-based EEMD-DBSCAN framework and explainable AI. Heliyon 2024, 10, e23434. [CrossRef]
461. Mohanrajan, S.N.; Loganathan, A. Novel Vision Transformer-Based Bi-LSTM Model for LU/LC Prediction-Javadi Hills, India.
Appl. Sci. 2022, 12, 6387. [CrossRef]
462. Zhang, L.; Bibi, F.; Hussain, I.; Sultan, M.; Arshad, A.; Hasnain, S.; Alarifi, I.M.; Alamir, M.A.; Sajjad, U. Evaluating the
Stress-Strain Relationship of the Additively Manufactured Lattice Structures. Micromachines 2023, 14, 75. [CrossRef] [PubMed]
463. Wang, H.; Doumard, E.; Soule-Dupuy, C.; Kemoun, P.; Aligon, J.; Monsarrat, P. Explanations as a New Metric for Feature Selection:
A Systematic Approach. IEEE J. Biomed. Health Inform. 2023, 27, 4131–4142. [CrossRef] [PubMed]
464. Pierrard, R.; Poli, J.P.; Hudelot, C. Spatial relation learning for explainable image classification and annotation in critical
applications. Artif. Intell. 2021, 292, 103434. [CrossRef]
465. Praetorius, J.P.; Walluks, K.; Svensson, C.M.; Arnold, D.; Figge, M.T. IMFSegNet: Cost-effective and objective quantification of
intramuscular fat in histological sections by deep learning. Comput. Struct. Biotechnol. J. 2023, 21, 3696–3704. [CrossRef] [PubMed]
466. Pan, S.; Hoque, S.; Deravi, F. An Attention-Guided Framework for Explainable Biometric Presentation Attack Detection. Sensors
2022, 22, 3365. [CrossRef] [PubMed]
467. Wang, Y.; Huang, M.; Deng, H.; Li, W.; Wu, Z.; Tang, Y.; Liu, G. Identification of vital chemical information via visualization of
graph neural networks. Briefings Bioinform. 2023, 24, bbac577. [CrossRef] [PubMed]
468. Naser, M.Z. CLEMSON: An Automated Machine-Learning Virtual Assistant for Accelerated, Simulation-Free, Transparent,
Reduced-Order, and Inference-Based Reconstruction of Fire Response of Structural Members. J. Struct. Eng. 2022, 148, 04022120.
[CrossRef]
469. Karamanou, A.; Brimos, P.; Kalampokis, E.; Tarabanis, K. Exploring the Quality of Dynamic Open Government Data Using
Statistical and Machine Learning Methods. Sensors 2022, 22, 9684. [CrossRef]
470. Kim, T.; Kwon, S.; Kwon, Y. Prediction of Wave Transmission Characteristics of Low-Crested Structures with Comprehensive
Analysis of Machine Learning. Sensors 2021, 21, 8192. [CrossRef]
471. Gong, H.; Wang, M.; Zhang, H.; Elahe, M.F.; Jin, M. An Explainable AI Approach for the Rapid Diagnosis of COVID-19 Using
Ensemble Learning Algorithms. Front. Public Health 2022, 10, 874455. [CrossRef] [PubMed]
472. Burzynski, D. Useful energy prediction model of a Lithium-ion cell operating on various duty cycles. Eksploat. Niezawodn.
Maint. Reliab. 2022, 24, 317–329. [CrossRef]
473. Kim, D.; Ho, C.H.; Park, I.; Kim, J.; Chang, L.S.; Choi, M.H. Untangling the contribution of input parameters to an artificial
intelligence PM2.5 forecast model using the layer-wise relevance propagation method. Atmos. Environ. 2022, 276, 119034.
[CrossRef]
474. Galiger, G.; Bodo, Z. Explainable patch-level histopathology tissue type detection with bag-of-local-features models and data
augmentation. Acta Univ. Sapientiae Inform. 2023, 15, 60–80. [CrossRef]
475. Naeem, H.; Dong, S.; Falana, O.J.; Ullah, F. Development of a deep stacked ensemble with process based volatile memory
forensics for platform independent malware detection and classification. Expert Syst. Appl. 2023, 223, 119952. [CrossRef]
476. Uddin, M.Z.; Soylu, A. Human activity recognition using wearable sensors, discriminant analysis, and long short-term memory-
based neural structured learning. Sci. Rep. 2021, 11, 16455. [CrossRef] [PubMed]
477. Sinha, A.; Das, D. XAI-LCS: Explainable AI-Based Fault Diagnosis of Low-Cost Sensors. IEEE Sens. Lett. 2023, 7, 6009304.
[CrossRef]
478. Jacinto, M.V.G.; Neto, A.D.D.; de Castro, D.L.; Bezerra, F.H.R. Karstified zone interpretation using deep learning algorithms:
Convolutional neural networks applications and model interpretability with explainable AI. Comput. Geosci. 2023, 171, 105281.
[CrossRef]
479. Jakubowski, J.; Stanisz, P.; Bobek, S.; Nalepa, G.J. Anomaly Detection in Asset Degradation Process Using Variational Autoencoder
and Explanations. Sensors 2022, 22, 291. [CrossRef]
480. Guo, C.; Zhao, Z.; Ren, J.; Wang, S.; Liu, Y.; Chen, X. Causal explaining guided domain generalization for rotating machinery
intelligent fault diagnosis. Expert Syst. Appl. 2024, 243, 122806. [CrossRef]
481. Shi, X.; Keenan, T.D.L.; Chen, Q.; De Silva, T.; Thavikulwat, A.T.; Broadhead, G.; Bhandari, S.; Cukras, C.; Chew, E.Y.; Lu, Z.
Improving Interpretability in Machine Diagnosis Detection of Geographic Atrophy in OCT Scans. Ophthalmol. Sci. 2021, 1, 100038.
[CrossRef]
482. Panos, B.; Kleint, L.; Zbinden, J. Identifying preflare spectral features using explainable artificial intelligence. Astron. Astrophys.
2023, 671, A73. [CrossRef]
483. Fang, H.; Shao, Y.; Xie, C.; Tian, B.; Shen, C.; Zhu, Y.; Guo, Y.; Yang, Y.; Chen, G.; Zhang, M. A New Approach to Spatial
Landslide Susceptibility Prediction in Karst Mining Areas Based on Explainable Artificial Intelligence. Sustainability 2023, 15,
3094. [CrossRef]
484. Karami, H.; Derakhshani, A.; Ghasemigol, M.; Fereidouni, M.; Miri-Moghaddam, E.; Baradaran, B.; Tabrizi, N.J.; Najafi, S.;
Solimando, A.G.; Marsh, L.M.; et al. Weighted Gene Co-Expression Network Analysis Combined with Machine Learning
Validation to Identify Key Modules and Hub Genes Associated with SARS-CoV-2 Infection. J. Clin. Med. 2021, 10, 3567. [CrossRef]
485. Baek, M.; Kim, S.B. Failure Detection and Primary Cause Identification of Multivariate Time Series Data in Semiconductor
Equipment. IEEE Access 2023, 11, 54363–54372. [CrossRef]
486. Nguyen, P.X.; Tran, T.H.; Pham, N.B.; Do, D.N.; Yairi, T. Human Language Explanation for a Decision Making Agent via
Automated Rationale Generation. IEEE Access 2022, 10, 110727–110741. [CrossRef]
487. Shahriar, S.M.; Bhuiyan, E.A.; Nahiduzzaman, M.; Ahsan, M.; Haider, J. State of Charge Estimation for Electric Vehicle Battery
Management Systems Using the Hybrid Recurrent Learning Approach with Explainable Artificial Intelligence. Energies 2022,
15, 8003. [CrossRef]
488. Kim, D.; Handayani, M.P.; Lee, S.; Lee, J. Feature Attribution Analysis to Quantify the Impact of Oceanographic and Maneuver-
ability Factors on Vessel Shaft Power Using Explainable Tree-Based Model. Sensors 2023, 23, 1072. [CrossRef] [PubMed]
489. Lemanska-Perek, A.; Krzyzanowska-Golab, D.; Kobylinska, K.; Biecek, P.; Skalec, T.; Tyszko, M.; Gozdzik, W.; Adamik, B.
Explainable Artificial Intelligence Helps in Understanding the Effect of Fibronectin on Survival of Sepsis. Cells 2022, 11, 2433.
[CrossRef] [PubMed]
490. Minutti-Martinez, C.; Escalante-Ramirez, B.; Olveres-Montiel, J. PumaMedNet-CXR: An Explainable Generative Artificial
Intelligence for the Analysis and Classification of Chest X-Ray Images. Comput. y Sist. 2023, 27, 909–920. [CrossRef]
491. Kim, T.; Moon, N.H.; Goh, T.S.; Jung, I.D. Detection of incomplete atypical femoral fracture on anteroposterior radiographs via
explainable artificial intelligence. Sci. Rep. 2023, 13, 10415. [CrossRef] [PubMed]
492. Humer, C.; Heberle, H.; Montanari, F.; Wolf, T.; Huber, F.; Henderson, R.; Heinrich, J.; Streit, M. ChemInformatics Model Explorer
(CIME): Exploratory analysis of chemical model explanations. J. Cheminform. 2022, 14, 21. [CrossRef]
493. Zhang, K.; Zhang, J.; Xu, P.; Gao, T.; Gao, W. A multi-hierarchical interpretable method for DRL-based dispatching control in
power systems. Int. J. Electr. Power Energy Syst. 2023, 152, 109240. [CrossRef]
494. Yang, J.; Yue, Z.; Yuan, Y. Noise-Aware Sparse Gaussian Processes and Application to Reliable Industrial Machinery Health
Monitoring. IEEE Trans. Ind. Inform. 2023, 19, 5995–6005. [CrossRef]
495. Cheng, F.; Liu, D.; Du, F.; Lin, Y.; Zytek, A.; Li, H.; Qu, H.; Veeramachaneni, K. VBridge: Connecting the Dots between Features
and Data to Explain Healthcare Models. IEEE Trans. Vis. Comput. Graph. 2022, 28, 378–388. [CrossRef]
496. Laqua, A.; Schnee, J.; Pletinckx, J.; Meywerk, M. Exploring User Experience in Sustainable Transport with Explainable AI Methods
Applied to E-Bikes. Appl. Sci. 2023, 13, 1277. [CrossRef]
497. Sanderson, J.; Mao, H.; Abdullah, M.A.M.; Al-Nima, R.R.O.; Woo, W.L. Optimal Fusion of Multispectral Optical and SAR Images
for Flood Inundation Mapping through Explainable Deep Learning. Information 2023, 14, 660. [CrossRef]
498. Abe, S.; Tago, S.; Yokoyama, K.; Ogawa, M.; Takei, T.; Imoto, S.; Fuji, M. Explainable AI for Estimating Pathogenicity of Genetic
Variants Using Large-Scale Knowledge Graphs. Cancers 2023, 15, 1118. [CrossRef] [PubMed]
499. Kerz, E.; Zanwar, S.; Qiao, Y.; Wiechmann, D. Toward explainable AI (XAI) for mental health detection based on language
behavior. Front. Psychiatry 2023, 14, 1219479. [CrossRef]
500. Kim, T.; Jeon, M.; Lee, C.; Kim, J.; Ko, G.; Kim, J.Y.; Youn, C.H. Federated Onboard-Ground Station Computing with Weakly
Supervised Cascading Pyramid Attention Network for Satellite Image Analysis. IEEE Access 2022, 10, 117315–117333. [CrossRef]
501. Thrun, M.C.; Ultsch, A.; Breuer, L. Explainable AI Framework for Multivariate Hydrochemical Time Series. Mach. Learn. Knowl.
Extr. 2021, 3, 170–204. [CrossRef]
502. Beni, T.; Nava, L.; Gigli, G.; Frodella, W.; Catani, F.; Casagli, N.; Gallego, J.I.; Margottini, C.; Spizzichino, D. Classification of
rock slope cavernous weathering on UAV photogrammetric point clouds: The example of Hegra (UNESCO World Heritage Site,
Kingdom of Saudi Arabia). Eng. Geol. 2023, 325, 107286. [CrossRef]
503. Zhou, R.; Zhang, Y. Predicting and explaining karst spring dissolved oxygen using interpretable deep learning approach. Hydrol.
Process. 2023, 37, e14948. [CrossRef]
504. Barros, J.; Cunha, F.; Martins, C.; Pedrosa, P.; Cortez, P. Predicting Weighing Deviations in the Dispatch Workflow Process: A
Case Study in a Cement Industry. IEEE Access 2023, 11, 8119–8135. [CrossRef]
505. Kayadibi, I.; Guraksin, G.E. An Explainable Fully Dense Fusion Neural Network with Deep Support Vector Machine for Retinal
Disease Determination. Int. J. Comput. Intell. Syst. 2023, 16, 28. [CrossRef]
506. Qamar, T.; Bawany, N.Z. Understanding the black-box: Towards interpretable and reliable deep learning models. PeerJ Comput.
Sci. 2023, 9, e1629. [CrossRef] [PubMed]
507. Crespi, M.; Ferigo, A.; Custode, L.L.; Iacca, G. A population-based approach for multi-agent interpretable reinforcement learning.
Appl. Soft Comput. 2023, 147, 110758. [CrossRef]
508. Sabrina, F.; Sohail, S.; Farid, F.; Jahan, S.; Ahamed, F.; Gordon, S. An Interpretable Artificial Intelligence Based Smart Agriculture
System. CMC-Comput. Mater. Contin. 2022, 72, 3777–3797. [CrossRef]
509. Wu, J.; Wang, Z.; Dong, J.; Cui, X.; Tao, S.; Chen, X. Robust Runoff Prediction with Explainable Artificial Intelligence and
Meteorological Variables from Deep Learning Ensemble Model. Water Resour. Res. 2023, 59, e2023WR035676. [CrossRef]
510. Nakamura, K.; Uchino, E.; Sato, N.; Araki, A.; Terayama, K.; Kojima, R.; Murashita, K.; Itoh, K.; Mikami, T.; Tamada, Y.; et al.
Individual health-disease phase diagrams for disease prevention based on machine learning. J. Biomed. Inform. 2023, 144, 104448.
[CrossRef]
511. Oh, S.; Park, Y.; Cho, K.J.; Kim, S.J. Explainable Machine Learning Model for Glaucoma Diagnosis and Its Interpretation.
Diagnostics 2021, 11, 510. [CrossRef]
512. Borujeni, S.M.; Arras, L.; Srinivasan, V.; Samek, W. Explainable sequence-to-sequence GRU neural network for pollution
forecasting. Sci. Rep. 2023, 13, 9940. [CrossRef]
513. Alharbi, A.; Petrunin, I.; Panagiotakopoulos, D. Assuring Safe and Efficient Operation of UAV Using Explainable Machine
Learning. Drones 2023, 7, 327. [CrossRef]
514. Sheu, R.K.; Pardeshi, M.S.; Pai, K.C.; Chen, L.C.; Wu, C.L.; Chen, W.C. Interpretable Classification of Pneumonia Infection Using
eXplainable AI (XAI-ICP). IEEE Access 2023, 11, 28896–28919. [CrossRef]
515. Aslam, N.; Khan, I.U.; Aljishi, R.F.; Alnamer, Z.M.; Alzawad, Z.M.; Almomen, F.A.; Alramadan, F.A. Explainable Computational
Intelligence Model for Antepartum Fetal Monitoring to Predict the Risk of IUGR. Electronics 2022, 11, 593. [CrossRef]
516. Peng, P.; Zhang, Y.; Wang, H.; Zhang, H. Towards robust and understandable fault detection and diagnosis using denoising
sparse autoencoder and smooth integrated gradients. ISA Trans. 2022, 125, 371–383. [CrossRef] [PubMed]
517. Na Pattalung, T.; Ingviya, T.; Chaichulee, S. Feature Explanations in Recurrent Neural Networks for Predicting Risk of Mortality
in Intensive Care Patients. J. Pers. Med. 2021, 11, 934. [CrossRef] [PubMed]
518. Oliveira, F.R.D.S.; Neto, F.B.D.L. Method to Produce More Reasonable Candidate Solutions with Explanations in Intelligent
Decision Support Systems. IEEE Access 2023, 11, 20861–20876. [CrossRef]
519. Burgueno, A.M.; Aldana-Martin, J.F.; Vazquez-Pendon, M.; Barba-Gonzalez, C.; Jimenez Gomez, Y.; Garcia Millan, V.; Navas-
Delgado, I. Scalable approach for high-resolution land cover: A case study in the Mediterranean Basin. J. Big Data 2023, 10, 91.
[CrossRef]
520. Horst, F.; Slijepcevic, D.; Simak, M.; Horsak, B.; Schoellhorn, W.I.; Zeppelzauer, M. Modeling biological individuality using
machine learning: A study on human gait. Comput. Struct. Biotechnol. J. 2023, 21, 3414–3423. [CrossRef]
521. Napoles, G.; Hoitsma, F.; Knoben, A.; Jastrzebska, A.; Espinosa, M.L. Prolog-based agnostic explanation module for structured
pattern classification. Inf. Sci. 2023, 622, 1196–1227. [CrossRef]
522. Ni, L.; Wang, D.; Singh, V.P.; Wu, J.; Chen, X.; Tao, Y.; Zhu, X.; Jiang, J.; Zeng, X. Monthly precipitation prediction at regional scale
using deep convolutional neural networks. Hydrol. Process. 2023, 37, e14954. [CrossRef]
523. Amiri-Zarandi, M.; Karimipour, H.; Dara, R.A. A federated and explainable approach for insider threat detection in IoT. Internet
Things 2023, 24, 100965. [CrossRef]
524. Niu, Y.; Gu, L.; Zhao, Y.; Lu, F. Explainable Diabetic Retinopathy Detection and Retinal Image Generation. IEEE J. Biomed. Health
Inform. 2022, 26, 44–55. [CrossRef]
525. Kliangkhlao, M.; Limsiroratana, S.; Sahoh, B. The Design and Development of a Causal Bayesian Networks Model for the
Explanation of Agricultural Supply Chains. IEEE Access 2022, 10, 86813–86823. [CrossRef]
526. Dissanayake, T.; Fernando, T.; Denman, S.; Sridharan, S.; Ghaemmaghami, H.; Fookes, C. A Robust Interpretable Deep Learning
Classifier for Heart Anomaly Detection without Segmentation. IEEE J. Biomed. Health Inform. 2021, 25, 2162–2171. [CrossRef]
[PubMed]
527. Dastile, X.; Celik, T. Making Deep Learning-Based Predictions for Credit Scoring Explainable. IEEE Access 2021, 9, 50426–50440.
[CrossRef]
528. Khan, M.A.; Azhar, M.; Ibrar, K.; Alqahtani, A.; Alsubai, S.; Binbusayyis, A.; Kim, Y.J.; Chang, B. COVID-19 Classification from
Chest X-Ray Images: A Framework of Deep Explainable Artificial Intelligence. Comput. Intell. Neurosci. 2022, 2022, 4254631.
[CrossRef]
529. Moon, S.; Lee, H. JDSNMF: Joint Deep Semi-Non-Negative Matrix Factorization for Learning Integrative Representation of
Molecular Signals in Alzheimer’s Disease. J. Pers. Med. 2021, 11, 686. [CrossRef]
530. Kiefer, S.; Hoffmann, M.; Schmid, U. Semantic Interactive Learning for Text Classification: A Constructive Approach for
Contextual Interactions. Mach. Learn. Knowl. Extr. 2022, 4, 994–1010. [CrossRef]
531. Franco, D.; Oneto, L.; Navarin, N.; Anguita, D. Toward Learning Trustworthily from Data Combining Privacy, Fairness, and
Explainability: An Application to Face Recognition. Entropy 2021, 23, 1047. [CrossRef]
532. Montiel-Vazquez, E.C.; Uresti, J.A.R.; Loyola-Gonzalez, O. An Explainable Artificial Intelligence Approach for Detecting Empathy
in Textual Communication. Appl. Sci. 2022, 12, 9407. [CrossRef]
533. Mollas, I.; Bassiliades, N.; Tsoumakas, G. Truthful meta-explanations for local interpretability of machine learning models. Appl.
Intell. 2023, 53, 26927–26948. [CrossRef]
534. Juang, C.F.; Chang, C.W.; Hung, T.H. Hand Palm Tracking in Monocular Images by Fuzzy Rule-Based Fusion of Explainable
Fuzzy Features with Robot Imitation Application. IEEE Trans. Fuzzy Syst. 2021, 29, 3594–3606. [CrossRef]
535. Cicek, I.B.; Colak, C.; Yologlu, S.; Kucukakcali, Z.; Ozhan, O.; Taslidere, E.; Danis, N.; Koc, A.; Parlakpinar, H.; Akbulut, S.
Development of a Clinical Decision Support System Based on Tree-Based Machine Learning Methods to Detect Diagnostic
Biomarkers from Genomic Data in Methotrexate-Induced Rats with Nephrotoxicity. Appl. Sci. 2023, 13, 8870. [CrossRef]
536. Jung, D.H.; Kim, H.Y.; Won, J.H.; Park, S.H. Development of a classification model for Cynanchum wilfordii and Cynanchum
auriculatum using convolutional neural network and local interpretable model-agnostic explanation technology. Front. Plant Sci.
2023, 14, 1169709. [CrossRef] [PubMed]
537. Rawal, A.; Kidchob, C.; Ou, J.; Yogurtcu, O.N.; Yang, H.; Sauna, Z.E. A machine learning approach for identifying variables
associated with risk of developing neutralizing antidrug antibodies to factor VIII. Heliyon 2023, 9, e16331. [CrossRef]
538. Yeung, C.; Ho, D.; Pham, B.; Fountaine, K.T.; Zhang, Z.; Levy, K.; Raman, A.P. Enhancing Adjoint Optimization-Based Photonic
Inverse Design with Explainable Machine Learning. ACS Photonics 2022, 9, 1577–1585. [CrossRef]
539. Naeem, H.; Alshammari, B.M.; Ullah, F. Explainable Artificial Intelligence-Based IoT Device Malware Detection Mechanism
Using Image Visualization and Fine-Tuned CNN-Based Transfer Learning Model. Comput. Intell. Neurosci. 2022, 2022, 7671967.
[CrossRef]
540. Mey, O.; Neufeld, D. Explainable AI Algorithms for Vibration Data-Based Fault Detection: Use Case-Adapted Methods and
Critical Evaluation. Sensors 2022, 22, 9037. [CrossRef]
541. Martinez, G.S.; Perez-Rueda, E.; Kumar, A.; Sarkar, S.; Silva, S.d.A.e. Explainable artificial intelligence as a reliable annotator of
archaeal promoter regions. Sci. Rep. 2023, 13, 1763. [CrossRef]
542. Nkengue, M.J.; Zeng, X.; Koehl, L.; Tao, X. X-RCRNet: An explainable deep-learning network for COVID-19 detection using ECG
beat signals. Biomed. Signal Process. Control 2024, 87, 105424. [CrossRef]
543. Behrens, G.; Beucler, T.; Gentine, P.; Iglesias-Suarez, F.; Pritchard, M.; Eyring, V. Non-Linear Dimensionality Reduction with
a Variational Encoder Decoder to Understand Convective Processes in Climate Models. J. Adv. Model. Earth Syst. 2022,
14, e2022MS003130. [CrossRef]
544. Fatahi, R.; Nasiri, H.; Dadfar, E.; Chelgani, S.C. Modeling of energy consumption factors for an industrial cement vertical roller
mill by SHAP-XGBoost: A “conscious lab” approach. Sci. Rep. 2022, 12, 7543. [CrossRef] [PubMed]
545. De Groote, W.; Kikken, E.; Hostens, E.; Van Hoecke, S.; Crevecoeur, G. Neural Network Augmented Physics Models for Systems
with Partially Unknown Dynamics: Application to Slider-Crank Mechanism. IEEE/ASME Trans. Mechatronics 2022, 27, 103–114.
[CrossRef]
546. Takalo-Mattila, J.; Heiskanen, M.; Kyllonen, V.; Maatta, L.; Bogdanoff, A. Explainable Steel Quality Prediction System Based on
Gradient Boosting Decision Trees. IEEE Access 2022, 10, 68099–68110. [CrossRef]
547. Jang, J.; Jeong, W.; Kim, S.; Lee, B.; Lee, M.; Moon, J. RAID: Robust and Interpretable Daily Peak Load Forecasting via Multiple
Deep Neural Networks and Shapley Values. Sustainability 2023, 15, 6951. [CrossRef]
548. Aishwarya, N.; Veena, M.B.; Ullas, Y.L.; Rajasekaran, R.T. “SWASTHA-SHWASA”: Utility of Deep Learning for Diagnosis of
Common Lung Pathologies from Chest X-rays. Int. J. Early Child. Spec. Educ. 2022, 14, 1895–1905. [CrossRef]
549. Kaczmarek-Majer, K.; Casalino, G.; Castellano, G.; Dominiak, M.; Hryniewicz, O.; Kaminska, O.; Vessio, G.; Diaz-Rodriguez, N.
PLENARY: Explaining black-box models in natural language through fuzzy linguistic summaries. Inf. Sci. 2022, 614, 374–399.
[CrossRef]
550. Bae, H. Evaluation of Malware Classification Models for Heterogeneous Data. Sensors 2024, 24, 288. [CrossRef]
551. Gerussi, A.; Verda, D.; Cappadona, C.; Cristoferi, L.; Bernasconi, D.P.; Bottaro, S.; Carbone, M.; Muselli, M.; Invernizzi, P.; Asselta,
R.; et al. LLM-PBC: Logic Learning Machine-Based Explainable Rules Accurately Stratify the Genetic Risk of Primary Biliary
Cholangitis. J. Pers. Med. 2022, 12, 1587. [CrossRef]
552. Li, B.M.; Castorina, V.L.; Hernandez, M.D.C.V.; Clancy, U.; Wiseman, S.J.; Sakka, E.; Storkey, A.J.; Garcia, D.J.; Cheng, Y.; Doubal,
F.; et al. Deep attention super-resolution of brain magnetic resonance images acquired under clinical protocols. Front. Comput.
Neurosci. 2022, 16, 887633. [CrossRef] [PubMed]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.