
Systematic Review
Recent Applications of Explainable AI (XAI): A Systematic
Literature Review
Mirka Saarela 1, * and Vili Podgorelec 2

1 Faculty of Information Technology, University of Jyväskylä, P.O. Box 35, FI-40014 Jyväskylä, Finland
2 Faculty of Electrical Engineering and Computer Science, University of Maribor, 2000 Maribor, Slovenia;
vili.podgorelec@um.si
* Correspondence: mirka.saarela@jyu.fi

Abstract: This systematic literature review employs the Preferred Reporting Items for Systematic
Reviews and Meta-Analyses (PRISMA) methodology to investigate recent applications of explainable
AI (XAI) over the past three years. From an initial pool of 664 articles identified through the Web
of Science database, 512 peer-reviewed journal articles met the inclusion criteria—namely, being
recent, high-quality XAI application articles published in English—and were analyzed in detail.
Both qualitative and quantitative statistical techniques were used to analyze the identified articles:
qualitatively by summarizing the characteristics of the included studies based on predefined codes,
and quantitatively through statistical analysis of the data. These articles were categorized according
to their application domains, techniques, and evaluation methods. Health-related applications were
particularly prevalent, with a strong focus on cancer diagnosis, COVID-19 management, and medical
imaging. Other significant areas of application included environmental and agricultural management,
industrial optimization, cybersecurity, finance, transportation, and entertainment. Additionally,
emerging applications in law, education, and social care highlight XAI’s expanding impact. The
review reveals a predominant use of local explanation methods, particularly SHAP and LIME, with
SHAP being favored for its stability and mathematical guarantees. However, a critical gap in the
evaluation of XAI results is identified, as most studies rely on anecdotal evidence or expert opinion
rather than robust quantitative metrics. This underscores the urgent need for standardized evaluation
frameworks to ensure the reliability and effectiveness of XAI applications. Future research should
focus on developing comprehensive evaluation standards and improving the interpretability and
stability of explanations. These advancements are essential for addressing the diverse demands of
various application domains while ensuring trust and transparency in AI systems.

Keywords: explainable artificial intelligence; applications; interpretable machine learning; convolutional
neural network; deep learning; post-hoc explanations; model-agnostic explanations

Citation: Saarela, M.; Podgorelec, V. Recent Applications of Explainable AI (XAI): A Systematic Literature Review. Appl. Sci. 2024, 14, 8884. https://doi.org/10.3390/app14198884

Academic Editors: Douglas O’Shaughnessy and Pedro Couto

Received: 4 August 2024
Revised: 3 September 2024
Accepted: 25 September 2024
Published: 2 October 2024

Copyright: © 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction
In recent decades, there has been a rapid surge in the development and widespread utilization of artificial intelligence (AI) and Machine Learning (ML). The complexity and scale of these models have expanded in pursuit of improved predictive capabilities. However, there is growing scrutiny directed towards the sole emphasis on model performance. This approach often results in the creation of opaque, large-scale models, making it challenging for users to assess, comprehend, and potentially rectify the system’s decisions. Consequently, there is a pressing need for interpretable and explainable AI (XAI), which aims to enhance the comprehensibility of AI systems and their outputs for humans. The advent of deep learning over the past decade has intensified efforts to devise methodologies for elucidating and interpreting these opaque systems [1–3].
The literature on XAI is highly diverse, spanning multiple (sub-)disciplines [4], and has been growing at an exponential rate [5]. While numerous reviews have been published
on XAI in general [1,2,5], there is a noticeable gap when it comes to in-depth analyses
focused specifically on XAI applications. Existing reviews predominantly explore founda-
tional concepts and theoretical advancements, but only a few concentrate on how XAI is
being applied across different domains. Although a few reviews on XAI applications do
exist [6–8], they have limitations in terms of the coverage period and the number of articles
reviewed. For instance, Hu et al. [6] published their review in 2021, thus excluding any
articles published thereafter. Additionally, they do not specify the total number of articles
reviewed, and their reference list includes only 70 articles. Similarly, Islam et al. [7] and
Saranya and Subhashini [8] reviewed 137 and 91 articles, respectively, but also focused on
earlier periods, leaving a gap in the literature regarding the latest XAI applications.
In contrast, our review fills this gap by providing a more comprehensive and up-
to-date synthesis of XAI applications, analyzing a significantly larger set of 512 recent
articles. Each article was thoroughly reviewed and categorized according to predefined
codes, enabling a systematic and detailed examination of current trends and developments
in XAI applications. This broader scope not only captures the latest advancements but also
offers a more thorough and nuanced overview than previous reviews, making it a valuable
resource for understanding the current landscape of XAI applications.
Given the rapid advancements and diverse applications of XAI, our research focuses
on addressing the following key questions:
• Domains: what are the most common domains of recent XAI applications, and what
are emerging XAI domains?
• Techniques: Which XAI techniques are utilized? How do these techniques vary based
on the type of data used, and in what forms are the explanations presented?
• Evaluation: How is explainability measured? Are specific metrics or evaluation
methods employed?
The remainder of this review is structured as follows: In Section 2, we provide a brief
overview of XAI taxonomies. Section 3 details the process used to identify relevant recent
XAI application articles, along with our coding and review procedures. Section 4 presents
the findings, highlighting the most common and emerging XAI application domains,
the techniques employed based on data type, and a summary of how the different XAI
explanations were evaluated. Finally, in Section 5, we discuss our findings in the context of
our research questions and suggest directions for future research.

2. Background: XAI Taxonomies


The primary focus of this review is on the recent applications of XAI across various
domains. However, to fully appreciate how XAI has been implemented in these areas, it
is essential to provide a brief overview of the key taxonomies of XAI methods. While an
exhaustive discussion of these taxonomies, along with the advantages and disadvantages
of each method, lies beyond the scope of this article, a concise summary is necessary to
ensure that the content and findings of this review are accessible to a broad audience.
For those seeking a more comprehensive exploration of XAI taxonomies and detailed
discussions on the pros and cons of various XAI methods, we recommend consulting recent
reviews [5,9–11] and comprehensive books on the subject [12,13].
Generally, XAI methods can be categorized based on their explanation mechanisms,
which may rely on examples [14–16], counterfactuals [17], hidden semantics [18], rules [19–21],
or features/attributions/saliency [22–25]. Among these, feature importances are the most
common explanation for classification models [26]. Feature importances leverage scoring
and ranking of features to quantify and enhance the interpretability of a model, thereby
explaining its behavior [27]. In cases where the model is trained on images, leading to
features representing super pixels, methods such as saliency maps or pixel attribution are
employed. Evaluating the saliency of features aids in ranking their explanatory power,
applicable for both feature selection and post-hoc explainability [5,28,29].
Other approaches to categorizing XAI methods are related to the techniques applied,
such as (i) ante-hoc versus post-hoc, (ii) global versus local, and (iii) model-specific versus
model-agnostic (see Figure 1). Ante-hoc/intrinsic XAI methods encompass techniques that
are inherently transparent, often due to their simplistic structures, such as linear regres-
sion models. Conversely, post-hoc methods elucidate a model’s reasoning retrospectively,
following its training phase [5,26,30]. Moreover, distinctions are made between local and
global explanations: while modular global explanations provide an overarching interpre-
tation of the entire model, addressing it comprehensively, local explanations elucidate
specific observations, such as individual images [31,32]. Furthermore, explanation tech-
niques may be categorized as model-specific, relying on aspects of the particular model,
or model-agnostic, applicable across diverse models [5,33]. Model-agnostic techniques
can be further categorized into perturbation- or occlusion-based versus gradient-based.
Techniques like occlusion- or perturbation-based methods manipulate sections of input
features or images to generate explanations, while gradient-based methods compute the
gradient of prediction (or classification score) concerning input features [34].
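To make the perturbation-based idea concrete, the following minimal sketch implements a simple occlusion-style saliency map for an image classifier. The `predict_proba` callable, patch size, and baseline value are illustrative assumptions rather than elements of any reviewed study; a gradient-based alternative would instead compute the gradient of the class score with respect to the input pixels, which requires a differentiable model.

```python
import numpy as np

def occlusion_saliency(image, predict_proba, target_class, patch=8, baseline=0.0):
    """Perturbation/occlusion-based saliency: slide a baseline-valued patch over the
    image and record how much the target-class score drops (larger drop = more salient).
    `predict_proba` is assumed to map an (H, W, C) array to a vector of class probabilities."""
    h, w = image.shape[:2]
    base_score = predict_proba(image)[target_class]
    saliency = np.zeros((h, w))
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = baseline  # occlude one region
            drop = base_score - predict_proba(occluded)[target_class]
            saliency[y:y + patch, x:x + patch] = drop      # attribute the drop to that region
    return saliency
```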

[Figure 1 diagram: XAI methods categorized by Stage (ante-hoc, post-hoc), Scope (global, local), and Applicability (model-specific, model-agnostic); evaluation approaches categorized into domain expert, anecdotal evidence, and explainability metrics.]

Figure 1. Overview of different XAI approaches and evaluation methods. These categories were used
to classify the XAI application papers reviewed in this study.

As with machine learning models themselves, there is no universally best XAI ap-
proach; the optimal technique depends on factors such as the nature of the data, the specific
application, and the characteristics of the underlying AI model. For instance, local ex-
planations are particularly useful when seeking insights into specific instances, such as
identifying the reasons behind false positives in a model’s predictions [35]. In cases where
the AI model is inherently complex, post-hoc techniques may be necessary to provide
explanations, with some methods, like those relying on gradients, being applicable only
to specific models, such as neural networks with differentiable layers [34,36]. While a
variety of XAI methods are available, evaluating their effectiveness remains a less-explored
area [4,11]. As illustrated in Figure 1, XAI evaluation approaches can be categorized into
consultations with human experts, anecdotal evidence, and quantitative metrics.
As explained above, our review extends existing work on XAI methods and tax-
onomies [5,9–11] by shifting the focus towards the practical applications of XAI across
various domains. In the next section, we will describe how we used the categorizations in
Figure 1 to classify the recent XAI application papers in our review.

3. Research Methodology
Based on the research questions posed in Section 1 and the different taxonomies of
XAI described in Section 2, we initiated our systematic review on recent applications of
XAI. To collect the relevant publications for this review, we followed the analytical protocol
of the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA)
guidelines [37]. A systematic review “is a review of a clearly formulated question that uses
systematic and explicit methods to identify, select, and critically appraise relevant research,
and to collect and analyze data from the studies that are included in the review” [12].
According to the PRISMA guidelines, our evaluation consisted of several stages: defining
eligibility criteria, defining information sources, presenting the search strategy, specifying
the selection process, data collection process, data item selection, studying the risk of bias
assessment, specifying effect measures, describing the synthesis methods, reporting bias,
and certainty assessment [37].
Information sources and search strategy: The search was conducted in February 2024
on Web of Science (WoS) by using the following Boolean search string on the paper topic
(note that searches for topic terms in WoS search the following fields within a record: Title,
Abstract, Author Keywords, Keywords Plus): TS = ((“explainable artificial intelligence”
OR XAI) AND (application* OR process*)). The asterisk (*) at the end of a keyword ensures
the inclusion of the term in both singular and plural forms and its derivatives. The search
was limited to English-language non-review articles published between 1 January 2021 and
20 February 2024 (the search results can be found here: https://www.webofscience.com/
wos/woscc/summary/495b659d-8f9e-4b77-8671-2fac26682231-cda1ce8b/relevance/1, ac-
cessed on 24 September 2024). We exclusively used WoS due to its authoritative status and
comprehensive coverage. Birkle et al. (2020) [38] highlight WoS as the world’s oldest and
most widely used research database, ensuring reliable and high-quality data. Its extensive
discipline coverage and advanced citation indexing make it ideal for identifying influential
works and mapping research trends [38].
Eligibility criteria and selection process: The literature selection process flow chart is
summarized in Figure 2. The database search produced 664 papers. After removing non-
English articles (n = 4), 660 were eligible for the full-text review and screening. During
the full-text screening, we implemented the inclusion and exclusion criteria (Table 1)
established through iterative discussions between the two authors. The reviewers assessed each article against the inclusion and exclusion criteria, with 512 research articles meeting
the inclusion criteria and being incorporated into the evaluation procedure.

Table 1. Inclusion and exclusion criteria for the review of recent applications of XAI.

Criterion | Included | Excluded
Language | English | Other languages, such as German, Chinese, and Spanish.
Publication type | Peer-reviewed journal articles | Book chapters, conference papers, magazine articles, reports, theses, and other gray literature.
Recentness | Recent papers published in 2021 or after | Papers published before 2021.
Study content | Application of XAI methods | Papers that generally described XAI or reviewed other works without describing any XAI applications.
Quality | Papers of sufficient quality | Papers that were exceptionally short (less than six pages) or those that did not fulfill the basic requirements for a publication channel (e.g., be peer-reviewed, have an international board [35]).

As reported in Figure 2, five articles were not retrievable from our universities’ net-
works, and 143 were excluded because they did not meet our inclusion criteria (primarily
because they introduced general XAI taxonomies or new methods without describing
specific XAI applications). Consequently, 512 articles remained for data extraction and
synthesis. For reasons of reproducibility, the entire list of included articles is attached
in Table A1, along with the XAI application and the reason(s) why the authors say that
explainability is essential in their domain.

Figure 2. PRISMA flow chart of the study selection process.

Data collection process, data items, study risk of bias assessment, effect measures, synthesis
methods, and reporting bias assessment: To categorize and summarize the included articles in
this review, the first author developed a Google Survey that was filled out for each selected
article. The survey included both categorical (multiple-choice) and open-ended questions
designed to systematically categorize the key aspects of the research. This approach ensured
a consistent and comprehensive analysis across all articles. The survey provided an Excel
file with all responses, simplifying the analysis process.
Each reviewer assessed their allocated articles using the predefined codes and survey
questions created by the first author. In cases of uncertainty regarding the classification
of an article, reviewers noted the ambiguity, and these articles, along with their tentative
classifications, were discussed collectively among both authors to reach a consensus. This
discussion was conducted in an unbiased manner to ensure accurate classifications. While
no automated tools were used for the review process, Python libraries were employed for
quantitative assessment.
Some of the developed codes (survey questions) were as follows:
• What was the main application domain, and what was the specific application?
• In what form (such as rules, feature importance, counterfactual) was the explana-
tion created?
• Did the authors use intrinsically explainable models or post-hoc explainability, and did
they focus on global or local explanations?
• How was the quality of the explanation(s) evaluated?
• What did the authors say about why the explainability of their specific application is
important? (Open-ended question.)
After completing the coding process and filling out the survey for each included article,
we synthesized the data using both qualitative and quantitative techniques to address our
research questions [39]. Qualitatively, we summarized the characteristics of the included
studies based on the predefined codes. Quantitatively, we performed statistical analysis of
the data, utilizing Python 3.11.5 to extract statistics from the annotated Excel table. This
combination of qualitative and quantitative approaches, along with collaborative efforts,
ensured the reliability and accuracy of our review process.
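As a minimal sketch of this quantitative step (the file name and column names below are hypothetical placeholders for the annotated survey export described above), the per-category counts reported in the following sections could be derived along these lines:

```python
import pandas as pd

# Load the annotated survey export (file name and column names are illustrative).
df = pd.read_excel("xai_review_responses.xlsx")

# Frequency of application domains and explanation scope across the 512 coded articles.
domain_counts = df["main_application_domain"].value_counts()
scope_share = df["explanation_scope"].value_counts(normalize=True) * 100  # global/local/both, in %

print(domain_counts.head(10))
print(scope_share.round(1))
```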
To assess the risk of reporting bias, we examined the completeness and transparency of
the data reported in each article, focusing on the availability of results related to our prede-
fined research questions. Articles that lacked essential data or failed to report key outcomes
were flagged for potential bias, and this was considered during the certainty assessment.
Certainty assessment: Regarding the quality of the articles, potential bias, and the cer-
tainty of their evidence, we followed the general recommendations [40] and included only
articles for which at least seven out of the ten quality questions proposed by Kitchenham
and Charters (2007) [39] could be answered affirmatively. Additionally, we ensured qual-
ity by selecting only articles published in prestigious journals that adhere to established
academic standards, such as being peer-reviewed and having an international editorial
board [35].
Table 2 reports the number of publications per journal for the ten journals with the
highest publication counts in our sample. As shown in the table, IEEE Access has the
highest number of publications, totaling 45, which represents 8.79% of our sample of
articles on recent XAI applications. It is followed by this journal (Applied Sciences-Basel)
with 37 publications (7.23%) and Sensors with 28 publications (5.47%).

Table 2. Number of publications for the ten journals with the highest publication counts in our
sample of articles on recent XAI applications.

Journal # of Publications
IEEE Access 45
Applied Sciences-Basel 37
Sensors 28
Scientific Reports 15
Electronics 14
Remote Sensing 8
Diagnostics 7
Information 7
Machine Learning and Knowledge Extraction 7
Sustainability 7

4. Results
In this section, we present the results of the 512 recent XAI application articles that
met our inclusion and quality criteria. As detailed in Section 3, we included only those
articles that satisfied our rigorous standards and were not flagged for bias. Once the articles
passed our inclusion criteria and were coded and analyzed, we did not conduct further
assessments of potential bias within the study results themselves. Our analysis relied on
quantitative summary statistics and qualitative summaries derived from these high-quality
articles. The complete list of these articles is provided in Table A1, along with their specific
XAI applications and the authors’ justifications for the importance of explainability in
their respective domains. Next, we provide an overview of recent XAI applications by
summarizing the findings from these 512 included articles.
4.1. Application Domains


As shown in Figure 3, the absolute majority of recent XAI applications are from the
health domain. For instance, several works have focused on different kinds of cancer predic-
tion and diagnosis, such as skin cancer detection and classification [32,41,42], breast cancer
prediction [43–45], prostate cancer management and prediction [46,47], lung cancer (relapse)
prediction [48,49], and ovarian cancer classification and surgery decision-making [50,51]. In
response to the COVID-19 pandemic, significant research has been directed toward using
medical imaging for detecting COVID-19 [52], predicting the need for ICU admission for
COVID-19 patients [53], diagnosing COVID-19 using chest X-ray images [54], predicting
COVID-19 [55–60], COVID-19 data classification [61], assessment of perceived stress in
healthcare professionals attending COVID-19 [62], and COVID-19 forecasting [58].

Figure 3. Main XAI application domain of the studies in our corpus (including all the main domains
mentioned in at least three papers).

Medical imaging and diagnostic applications are also prominent, including detecting
paratuberculosis from histopathological images [63], predicting coronary artery disease
from myocardial perfusion images [64], diagnosis and surgery [65], identifying reasons for
MRI scans in multiple sclerosis patients [66], detecting the health status of neonates [67],
spinal postures [68], and chronic wound classification [69]. Additionally, studies have
focused on age-related macular degeneration detection [70], predicting immunological
age [71], cognitive health assessment [72,73], cardiovascular medicine [74,75], glaucoma
prediction and diagnosis [76–78], as well as predicting diabetes [79–82] and classifying
arrhythmia [83,84].
General management applications in healthcare include predicting patient outcomes
in ICU [60], functional work ability prediction [85], a decision support system for nutrition-
related geriatric syndromes [86], predicting hospital admissions for cancer patients [87],
medical data management [88], medical text processing [89], ML model development in
medicine [90], pain recognition [91], drug response prediction [92,93], face mask detec-
tion [94], and studying the sustainability of smart technology applications in healthcare [95].
Lastly, studies about tracing food behaviors [96], aspiration detection in flexible endoscopic
evaluation of swallowing [97], human activity recognition [98], human lower limb activity
recognition [99], factors influencing hearing aid use [100], predicting chronic obstructive
pulmonary disease [101], and assessing developmental status in children [102] underline
the diverse use of XAI in the health domain.
It is also noteworthy that brain and neuroscience studies have frequently been the
main application (Figure 3), often related to health. For example, Alzheimer’s disease clas-
sification and prediction have been major areas of focus [103–109], and Parkinson’s disease
diagnosis has been extensively studied [110–113]. There is also significant research on brain
tumor diagnosis and localization [114–118], predicting brain hemorrhage [119], cognitive
neuroscience development [120], and detecting and explaining autism spectrum disor-
der [121]. Other notable brain studies include the detection of epileptic seizures [122,123],
predicting the risk of brain metastases in patients with lung cancer [124], and automating
skull stripping from brain magnetic resonance images [125]. Similarly, three pharmacy stud-
ies are related to health, including metabolic stability and CYP inhibition prediction [126]
and drug repurposing [127,128].
In the field of environmental and agricultural applications, various studies have uti-
lized XAI techniques for a wide range of purposes. For instance, earthquake-related studies
have focused on predicting an earthquake [129] and assessing the spatial probability of
earthquake impacts [130]. In the area of water resources and climate analysis, research has
been conducted on groundwater quality monitoring [131], predicting ocean circulation
regimes [132], water resources management through snowmelt-driven streamflow predic-
tion [133], and analyzing the impact of land cover changes on climate [134]. Additionally,
studies have addressed predicting spatiotemporal distributions of lake surface temperature
in the Great Lakes [135] and soil moisture prediction [136]. Environmental monitoring
and resource management applications also include predicting heavy metals in ground-
water [137], detection and quantification of isotopes using gamma-ray spectroscopy [138],
and recognizing bark beetle-infested forest areas [139]. Agricultural applications have
similarly leveraged XAI techniques for plant breeding [140], disease detection in agricul-
ture [141], diagnosis of plant stress [142], prediction of nitrogen requirements in rice [143],
grape leaf disease identification [144], and plant genomics [145].
Urban and industrial applications are also prominent, with studies on urban growth
modeling and prediction [146], building energy performance benchmarking [147], and opti-
mization of membraneless microfluidic fuel cells for energy production [148]. Furthermore,
predicting product gas composition and total gas yield [149], wastewater treatment [150],
and the prediction of undesirable events in oil wells [151] have been significant areas of
research. Lastly, environmental studies have also focused on predicting drought conditions
in the Canadian prairies [152].
In the manufacturing sector, XAI techniques have been employed for a variety of
predictive and diagnostic tasks. For instance, research has focused on prognostic lifetime
estimation of turbofan engines [153], fault prediction in 3D printers [154], and modeling
hydrocyclone performance [155]. Moreover, the prediction and monitoring of various
manufacturing processes have seen substantial research efforts. These include predictive
process monitoring [156,157], average surface roughness prediction in smart grinding pro-
cesses [158], and predictive maintenance in manufacturing systems [159]. Additionally,
modeling refrigeration system performance [160] and thermal management in manufac-
turing processes [161] have been explored. Concrete-related studies include predicting
the strength characteristics of concrete [162] and the identification of concrete cracks [163].
In the realm of industrial optimization and fault diagnosis, research has addressed the
intelligent system fault diagnosis of the robotic strain wave gear reducer [164] and the
optimization of injection molding processes [165]. The prediction of pentane content [166]
and the hot rolling process in the steel industry [167] have also been areas of focus. Studies
have further examined job cycle time [168] and yield prediction [169].
In the realm of security and defense, XAI techniques have been widely applied to
enhance cybersecurity measures. Several studies have focused on intrusion detection
systems [170–172], as well as trust management within these systems [173]. Research has
also explored detecting vulnerabilities in source code [174]. Cybersecurity applications
include general cybersecurity measures [175], the use of XAI methods in cybersecurity [176],
and specific studies on malware detection [177]. In the context of facial and voice recog-
nition and verification, XAI techniques have been employed for face verification [178]
and deepfake voice detection [179]. Additionally, research has addressed attacking ML
classifiers in EEG signal-based human emotion assessment systems using data poisoning
attacks [180]. Emerging security concerns in smart cities have led to studies on attack detec-
tion in IoT infrastructures [181]. Furthermore, aircraft detection from synthetic aperture
radar (SAR) imagery has been a significant area of research [182]. Social media monitoring
for xenophobic content detection [183] and the broader applications of intrusion detection
and cybersecurity [184] highlight the diverse use of XAI in this domain.
In the finance sector, XAI techniques have been employed to enhance various decision-
making processes. Research has focused on decision-making in banking and finance sector
applications [185], asset pricing [186], and predicting credit card fraud [187]. Studies have
also aimed at predicting decisions to approve or reject loans [188] and addressing a range of
credit-related problems, including fraud detection, risk assessment, investment decisions,
algorithmic trading, and other financial decision-making processes [189]. Credit risk assess-
ment has been a significant area of research, with studies on credit risk assessment [190],
predicting loan defaults [191], and credit risk estimation [192,193]. The prediction and
recognition of financial crisis roots have been explored [194], alongside risk management
in insurance savings products [195]. Furthermore, time series forecasting and anomaly
detection have been important areas of study [196].
XAI has also been used for transportation and self-driving car applications, such as
the safety of self-driving cars [197], marine autonomous surface vehicle engineering [198],
autonomous vehicles for object detection and networking [199,200], and the development of
advanced driver-assistance systems [201]. Similarly, XAI offered support in retail and sales,
such as inventory management [202], on-shelf availability monitoring [203], predicting
online purchases based on information about online behavior [204], customer journey
mapping automation [205], and churn prediction [206,207].
In the field of education, XAI has been applied to various areas such as the early predic-
tion of student performance [208], predicting dropout rates in engineering faculties [209],
forecasting alumni income [210], and analyzing student agency [211]. In psychology, XAI
was used for classifying psychological traits from digital footprints [212]; in social care,
for child welfare screening [213]; and in the laws, for detecting reasons behind a judge’s
decision-making process [214], predicting withdrawal from the legal process in cases of vio-
lence towards women in intimate relationships [215], and inter partes institution outcomes
predictions [216]. In natural language processing, XAI was used for explaining sentence
embedding [217], question classification [218], questions answering [219], sarcasm detec-
tion in dialogues [220], identifying emotions from speech [221], assessment of familiarity
ratings for domain concepts [222], and detecting AI-generated text [223].
In entertainment, XAI was used, for example, for movie recommendations [224], ex-
plaining art [225], and different gaming applications, including analyzing and optimizing the
performance of agents in a game [226], deep Q-learning experience replay [227], and cheating
detection and player churn prediction [228]. Furthermore, several studies concentrated on
(social) media deceptive online content (such as fake news and deepfake images) detec-
tion [229–234]. In summary, the recent applications of XAI span a diverse array of domains,
reflecting its evolving scope; Figure 4 illustrates eight notable application areas.

Figure 4. Saliency maps of eight diverse recent XAI applications from various domains: brain
tumor classification [116], grape leaf disease identification [144], emotion detection [235], ripe status
recognition [141], volcanic localizations [236], traffic sign classification [237], cell segmentation [238],
and glaucoma diagnosis [77] (from top to bottom and left to right).
4.2. XAI Methods


As shown in Figure 5, the majority of recent XAI papers used local explanations (53%),
or a combination of global and local explanations (29%).
SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Ex-
planations (LIME) are the most commonly used local XAI methods (Figure 6). While LIME
is fully model-agnostic, meaning it is independent of the prediction model and can be used
on top of any linear or non-linear model, the SHAP toolbox includes both model-agnostic
XAI tools (such as the SHAP Kernel Explainer) and model-specific XAI tools (such as the
TreeExplainer, which has been optimized for tree-based models [239]). However, LIME has
faced criticism for its instability, meaning the same inputs do not always result in the same
outputs [32], and its local approximation lacks a stable connection to the global level of the
model. In contrast, SHAP boasts four desirable properties: efficiency, symmetry, dummy,
and additivity [240], providing mathematical guarantees to address the local-to-global limi-
tation. These guarantees may explain SHAP’s higher popularity in recent XAI application
papers (Figure 6). Another local model-agnostic method used in recent XAI application papers is Anchors, which belongs to the same group of XAI techniques as SHAP and LIME but is used far less frequently (e.g., [167,188,190,229,241]).
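For illustration, the sketch below shows how SHAP's model-specific TreeExplainer and LIME's model-agnostic tabular explainer are typically applied to the same tree-based classifier. The dataset and model are placeholders chosen to keep the example self-contained; they are not drawn from the reviewed papers.

```python
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder data and model: a random forest on a public tabular dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# SHAP: model-specific TreeExplainer, optimized for tree ensembles.
tree_explainer = shap.TreeExplainer(model)
shap_values = tree_explainer.shap_values(X_test)  # per-class attributions for each test instance

# LIME: fully model-agnostic local surrogate fitted around one test instance.
lime_explainer = LimeTabularExplainer(
    X_train.values, feature_names=list(X.columns),
    class_names=["malignant", "benign"], mode="classification")
lime_exp = lime_explainer.explain_instance(
    X_test.values[0], model.predict_proba, num_features=5)
print(lime_exp.as_list())  # top local feature contributions for that instance
```

When no model-specific explainer is available, SHAP's Kernel Explainer (shap.KernelExplainer) serves as the fully model-agnostic counterpart.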

Figure 5. Number of papers in our corpus that used global versus local explanations.

While perturbation-based techniques, such as LIME (e.g., [65,175,187,242,243]) and SHAP


(e.g., [65,175,186,244,245]), are often the choices in recent XAI studies for tabular data, studies
involving images or other more complex data frequently use gradient-based techniques such
as Grad-CAM (e.g., [89,94,164,178,243]), Grad-CAM++ (e.g., [41,94,246–248]), SmoothGrad
(e.g., [246,249–252]), Integrated Gradients (e.g., [50,179,182,253,254]), or Layer-Wise Relevance
Propagation (LRP), such as those in [175,179,241,255,256]. Figure 4 shows eight examples of
saliency maps from image data of diverse recent XAI applications from various domains.
The most commonly used global model-agnostic techniques are Partial Dependence
Plots (PDP), such as those in [65,74,102,257,258], Accumulated Local Effects (ALE), as seen
in [136,157,258–260], and Permutation Importance (e.g., [74,136,137,156,180]). Conversely,
the most commonly used global intrinsically explainable methods are decision trees
(e.g., [88,91,183,191,261]) and logistic regression (e.g., [50,53,61,191,211]). It should be
noted that the latter two are used in countless other papers, but, given their inherent
interpretability, they are often not explicitly listed as XAI methods [31].
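As a brief, hedged illustration of these global, model-agnostic techniques, the following sketch assumes any fitted scikit-learn estimator `model` with held-out data `X_test`, `y_test` (such as the one sketched above); ALE plots are typically produced with dedicated libraries rather than scikit-learn and are omitted here.

```python
from sklearn.inspection import permutation_importance, PartialDependenceDisplay

# Permutation importance: global score per feature, measured as the mean drop in
# model performance when that feature's values are randomly shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = result.importances_mean.argsort()[::-1]  # features ranked by mean importance

# Partial dependence of the predicted outcome on the two top-ranked features.
PartialDependenceDisplay.from_estimator(model, X_test, features=ranked[:2].tolist())
```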
Figure 6. Most common explanation techniques used in the papers in our corpus (only XAI techniques
used in at least five papers are shown).

4.3. ML Models and Tasks


Figure 7 shows the most commonly used ML models in recent XAI papers (note that more than one ML model can be used in the same paper). Various neural network models (predominantly deep NNs) are the most frequently used (59% of papers), followed by tree-based models (e.g., decision tree, random forest, gradient boosting; 37% of papers), support vector machines (11%), linear or logistic regression (9%), K nearest neighbors (4%), Bayesian-based models (3%), and Gaussian models (e.g., Gaussian process regression and Gaussian mixture model; 2% of papers). The distribution of the ML models used in the reviewed articles is comparable to what is generally used in the field.
Besides the most common ML models, there are some others that are less used
and could therefore provide interesting alternative views on XAI. These include meth-
ods based on fuzzy logic (e.g., fuzzy rule-based classification [262], rule-based fuzzy
inference [226,263], fuzzy decision tree [264], fuzzy nonlinear programming [95]), graph-
based models (e.g., graph-deep NN [265,266], knowledge graph [267]), or some sort of
optimization with computational intelligence (e.g., particle swarm optimization [148,160],
clairvoyance optimization [268]).
The ML models have been used mainly for classification purposes (70%), followed
by regression (21%), clustering (4%) and reinforcement learning (1%), as can be seen
in Figure 8. Other tasks, which occurred in only one or at most two articles, include
segmentation [97,269], optimization [270,271], semi-supervised [272,273] or self-supervised
tasks [274], object detection [275], and novelty search [276].
There is no substantial difference between the major ML models with regard to the ML
task of their target application. The distributions of ML tasks for specific ML models (NN,
DT, LR, kNN, etc.) are all very similar to the overall one represented in Figure 8. Among all
major ML models, SVM stands out the most: it is used for classification somewhat more often than the others (in 80% of cases).
With regard to the application domain, health, environment, industry, and security
and defense are among the top five domains for all the major ML models, with the only
exception being linear or logistic regression. When linear or logistic regression was used as an ML model, finance is among the top three application domains, which is never the case for the other major ML models. Since finance is also the second most common domain for tree-based ML models, which, like linear and logistic regression, are among the most transparent and inherently interpretable models, this suggests that users in the financial domain are especially keen on gaining insights and explanations about how the ML models operate on their data.

[Figure 7 bar chart: counts per ML model — neural network, tree-based model, support vector machine, linear/logistic regression, K nearest neighbor, Bayesian, Gaussian, fuzzy logic-based, optimization-based, graph-based, other.]
Figure 7. Most commonly used ML models in the papers in our corpus (only ML models used at least five times are shown).

[Figure 8 bar chart: counts per ML task — classification, regression, clustering, reinforcement learning, other.]
Figure 8. The main ML tasks in the papers in our corpus (all other ML tasks are used in only one or
at most two papers).

4.4. Intrinsically Explainable Models


As shown in Figure 9, the majority of recent XAI papers applied post-hoc explainability approaches to ML models that are not inherently easy to interpret (79%), as opposed to intrinsically explainable models (12%); the remaining papers (9%) reported a combination of both. Figure 10 presents the distribution of intrinsically explainable ML models. Of all the reviewed XAI papers that reported their method as intrinsically explainable, the majority were tree-based (41%), followed by deep NNs (19%), linear or logistic regression (5%), and some Bayesian models (3%). The predominance of tree-based models, as well as the relatively high number of linear and logistic regression models, could have been expected, as both are considered naturally transparent and simpler to understand, given their inherent interpretability. On the other hand, the relatively high number of deep neural networks represented as intrinsically explainable is somewhat surprising.
[Figure 9 pie chart: post-hoc 403 papers (78.7%), intrinsically explainable 63 papers (12.3%), both 46 papers (9.0%).]

Figure 9. Number of papers in our corpus that used a post-hoc approach versus intrinsically
explainable ML model.

[Figure 10 bar chart: counts per intrinsically explainable ML model — tree-based, deep NN, linear/logistic regression, Bayesian, other/specific.]
Figure 10. Number of papers that used a specific ML model, which is presented as intrinsically
explainable.

There are significant differences between different ML models represented as intrinsi-


cally explainable with regard to the form of explanation they use. While the intrinsically
explainable tree-based ML models use a variety of forms of explanation, including feature
importance (in 50% of all cases), rules (38%), and visualization (31%), the deep NN models
being reported as intrinsically explainable rely mainly on visualization (in more than 67% of
all cases). The intrinsically explainable linear and logistic regression ML models, however,
use predominantly feature importance as their form of explanation (in 75% of all cases).
In the most frequent XAI application domain, namely health, the use of tree-based ML models is predominant: tree-based models are used in 28% of all health applications, followed by (deep) neural networks (22%) and, interestingly, fuzzy logic (11%), while all other models were used only once in health. Given the history of ML methods in medicine and healthcare, where the ability to validate predictions is as important as the prediction itself, and the consequent key role of decision trees [277], this result is not surprising.
With regard to other application domains, we can see that intrinsically explainable ML
models, like tree-based models and linear or logistic regression models, are used for finance
and education applications much more often than other ML models. While the financial
domain represents only 1% of (deep) neural network applications, it represents 6% of all
tree-based ML model applications (used for credit risk estimation [192,193], risk manage-
ment in insurance [195], financial crisis prediction [194], investment decisions and algo-
rithmic trading [189], and asset pricing [186]) and even 9% of linear or logistic regression
applications (used primarily for credit risk assessment [190] and prediction [193], as well
as financial decision-making processes [189]). While post-hoc explainability methods, pri-
marily SHAP and LIME, are the most favored in the financial sector [189], intrinsically
explainable modes are gaining popularity for revealing the insights and are being used
for stock market analysis [278] and forecasting [279], profit optimization, and predicting
loan defaults [191]. Education represents 2% of all applications of tree-based ML models
(including early prediction of student performance [208], predicting student dropout [209],
and advanced learning analytics [211]) and 4% of linear or logistic regression models (such
as pedagogical decision-making [211] and prediction of post-graduate success and alumni
income [210]), while (deep) neural networks are used for comparison with other methods
in only two of all the reviewed XAI papers concerning education [209,211].

4.5. Evaluating XAI


The use of well-defined and validated metrics for evaluating the quality of XAI results
(i.e., explanations) is of great importance for widespread adoption and further development
of XAI. However, a significant number of authors still use XAI methods as a sort of add-on
to their ML models and results without properly addressing the quality aspects of provided
explanations, and only a few articles in our corpus use metrics to quantitatively measure
the quality of their XAI results (Figure 11). More than 58% of the reviewed articles applied
XAI but did not provide any evaluation of their XAI results (e.g., [65,121,170,175,186]).
Among those that evaluated their XAI results, most relied on anecdotal evidence (20% of
the reviewed articles, e.g., [185,245,249,272,280]). In approximately 8% of papers, the au-
thors evaluated their XAI results by asking domain experts to evaluate the explanations
(e.g., [66,70,89,114,167]). In approximately 19% of papers, however, some sort of quantita-
tive metrics are used to provide the quality assessment (e.g., [94,179,187,242,244,281]).
These numbers are in line with a recent review article about XAI evaluation methods that also highlighted the lack of reported metrics for measuring explanation quality: according to Nauta et al. [4], only one in three studies that developed XAI algorithms evaluates explanations with anecdotal evidence, and only one in five evaluates explanations with users.
a computational model is an open issue” [282]. To address this issue, they introduced
an interpretability index to quantify how a granular rule-based model is interpretable
during online operation. In fact, the gap of “no agreed approach on evaluating produced
explanations” [283] is often mentioned as future work. Having such metrics would help address several XAI issues, such as decreasing the risk of confirmation bias [283,284].
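To make the notion of a quantitative explanation-quality metric concrete, the sketch below implements a simple faithfulness-style deletion test, one common family of such metrics. It is an illustrative example under stated assumptions (a feature-attribution vector and a `predict_proba` callable), not a metric prescribed by the reviewed studies.

```python
import numpy as np

def deletion_faithfulness(x, attribution, predict_proba, target_class, steps=10, baseline=0.0):
    """Deletion test: remove the features ranked most important first and track how
    quickly the target-class probability decays. A faster decay (smaller normalized
    area under the deletion curve) suggests the attribution is more faithful to the model."""
    order = np.argsort(attribution)[::-1]          # feature indices, most important first
    x_perturbed = np.array(x, dtype=float).copy()
    scores = [predict_proba(x_perturbed)[target_class]]
    for chunk in np.array_split(order, steps):
        x_perturbed[chunk] = baseline              # "delete" the next block of features
        scores.append(predict_proba(x_perturbed)[target_class])
    return np.trapz(scores) / len(scores)          # normalized area under the deletion curve
```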
For this purpose, we further analyzed the articles that used metrics to measure expla-
nation quality, primarily to see what the authors reported about the explainability of their
results. Since different ML tasks and/or ML models may focus on different aspects, we
divided the analysis according to the main task of the ML model.
In cases where metrics were used to evaluate the quality of explanations for clustering, segmentation, and other unsupervised ML methods, the findings highlight
that the evaluated XAI approaches provided accurate, transparent, and robust explana-
tions, aiding in the interpretation of the ML models and results (e.g., [285,286]). Human
and quantitative evaluations confirmed the methods’ superiority in generating reliable,
interpretable, and meaningful explanations [287], despite occasional contradictory insights
that proved useful for identifying anomalies [269].
For the reinforcement learning applications, the findings of papers assessing their
XAI results by evaluation metrics demonstrate that the proposed methods effectively
explained complex models and highlighted the potential of Shapley values for explainable
reinforcement learning [288]. Additionally, participants using the AAR/AI approach
identified more bugs with greater precision [289], and while explanations improved factory
layout efficiency, their interpretability remains an area for improvement [290].
Figure 11. Evaluation of the explanations in recent XAI application papers.

The findings of the papers using regression as their main task, which used some metric to
evaluate the explanations, underscore the critical role of explainability techniques like Shapley
and Grad-CAM in enhancing model interpretability and accuracy (e.g., [157,291]) across vari-
ous domains, from wind turbine anomaly detection [244] to credit card fraud prediction [187].
While global scores aid in feature selection, semi-local analyses offer more meaningful in-
sights [292]. XAI methods revealed system-level insights and emergent properties [293],
though challenges like inconsistency, instability, and complexity persist [157,294]. User studies
and model retraining confirmed the practical benefits of improved explanations [213,295].
However, the authors mentioned that the explainability of their results was limited by the
lack of suitable metrics for evaluating the explainability of algorithms [294].
Finally, for the most frequent ML task, classification, the analysis of the papers that used metrics to evaluate their explainability results emphasizes the impor-
tance of explainability in enhancing model transparency, robustness, and decision-making
accuracy across various applications, from object detection from SAR images [182] and
hate speech detection [296] to classification of skin cancer [32] and cyber threats [297]. Tech-
niques like SHAP, LIME, and Grad-CAM provided insights into feature importance and
model behavior (e.g., [124,298,299]). In some situations, the adopted XAI methods showed
improved performance and more meaningful explanations, aiding in tasks like malware
detection [177], diabetes prediction [82], extracting concepts [298], and remote sensing [300].
Evaluations confirmed that aligning explanations with human expectations and ensuring
local and global consistency are key to improving the effectiveness and trustworthiness
of AI systems [235]. The authors concluded that while explanation techniques show promise, there is still a long way to go before automatic systems can be reliably used in practice [32]; widely adopted XAI metrics would help considerably here.
In summary, the results reveal distinct preferences and practices in using XAI. Tree-
based models, commonly used in health applications, employ various explanation forms
like feature importance, rules, and visualization, while deep neural networks primarily
utilize visualization. Linear and logistic regression models favor feature importance.
In finance and education, tree-based and regression models are more prevalent than deep
neural networks. However, despite the widespread application of XAI methods, evaluation
practices remain underdeveloped. Over half of the studies did not assess the quality of
their explanations, with only a minority using quantitative metrics. There is a need for
standardized evaluation metrics to improve the reliability and effectiveness of XAI systems.

5. Discussion and Conclusions


This systematic literature review explored recent applications of Explainable AI (XAI)
over the last three years, identifying 664 relevant articles from the Web of Science (WoS).
After applying exclusion criteria, 512 articles were categorized based on their application
domains, utilized techniques, and evaluation methods. The findings indicate a domi-
nant trend in health-related applications, particularly in cancer prediction and diagnosis,
COVID-19 management, and various other medical imaging and diagnostic uses. Other
significant domains include environmental and agricultural applications, urban and indus-
trial optimization, manufacturing, security and defense, finance, transportation, education,
psychology, social care, law, natural language processing, and entertainment.
In health, XAI has been extensively applied to areas such as cancer detection, brain
and neuroscience studies, and general healthcare management. Environmental applications
span earthquake prediction, water resources management, and climate analysis. Urban
and industrial applications focus on energy performance, waste treatment, and manufac-
turing processes. In security, XAI techniques enhance cybersecurity and intrusion detection.
Financial applications improve decision-making processes in banking and asset manage-
ment. Transportation studies leverage XAI for autonomous vehicles and marine navigation.
The review also highlights emerging XAI applications in education for predicting student
performance and in social care for child welfare screening.
In categorizing recent XAI applications, we aimed to identify and highlight the most
significant overarching themes within the literature. While some categories, such as “health”,
are clearly defined and widely recognized within the research community, others, like “in-
dustry” and “technology”, are broader and less distinct. The latter categories encompass a
diverse range of applications, reflecting the varied contexts in which XAI methods are em-
ployed across different sectors. This categorization approach, though occasionally less precise,
captures the most critical global trends in XAI research. It acknowledges the interdisciplinary
nature of the field, where specific categories may overlap or lack the specificity found in
others. Despite these challenges, our goal was to provide a comprehensive overview that
highlights the most prominent domains where XAI is being applied while recognizing that
some categories, by their nature, are more general and encompass a wider array of subfields.
By far the most frequent ML task among the reviewed XAI papers is classification,
followed by regression and clustering. Among the ML models used, deep neural networks
are predominant, especially convolutional neural networks. The second most used group of
ML models are tree-based models (decision and regression trees, random forest, and other
types of tree ensembles). Interestingly, there is no substantial difference between the major
ML models with regard to the ML task of their target application.
Feature importance, referring to techniques that assign a score to input features based on
how useful they are at predicting a target variable [26], is the most common form of explana-
tion among the reviewed XAI papers. Some sort of visualization, trying to visually represent
the (hidden) knowledge of a ML model [301], is used very often as well. Other commonly
used forms of explanation include the use of saliency maps, rules, and counterfactuals.
Regarding methods, local explanations are predominant, with SHAP and LIME being
the most commonly used techniques. SHAP is preferred for its stability and mathematical
guarantees [240], while LIME is noted for its model-agnostic nature but criticized for its
instability [32]. Gradient-based techniques such as Grad-CAM, Grad-CAM++, SmoothGrad,
LRP, and Integrated Gradients are frequently used for image and complex data [179,182].
In general, post-hoc explainability is much more frequent than the use of intrinsically explainable ML models. However, only a few studies quantitatively measure the quality of
XAI results, with most relying on anecdotal evidence or expert evaluation [4].
In conclusion, the recent surge in XAI applications across diverse domains underscores
its growing importance in providing transparency and interpretability to AI models [4,5].
Health-related applications, particularly in oncology and medical diagnostics, dominate
the landscape, reflecting the critical need for explainable and trustworthy AI in sensitive
and high-stakes areas. The review also reveals significant research efforts in environmen-
tal management, industrial optimization, cybersecurity, and finance, demonstrating the
versatile utility of XAI techniques.
Despite the widespread adoption of XAI, there is a notable gap in the evaluation of
explanation quality. The analysis of how the authors evaluate the quality of their XAI
approaches and results revealed that, in the majority of studies, the authors either do not
evaluate the quality of their explanations at all or rely only on subjective or anecdotal methods,
with only a few employing rigorous quantitative metrics [284]. Cooperation with domain
experts and the involvement of users can greatly contribute to the practical usefulness of the
results, but above all, more attention needs to be paid to the development and use of well-defined
and generally adopted metrics for evaluating the quality of explanations. Only then can reliable,
interpretable, and meaningful explanations be expected with a significantly higher degree of
confidence. There is an urgent need for standardized evaluation
frameworks to ensure the reliability and effectiveness of XAI methods, as well as to improve
the interpretability and stability of explanations. The development of such metrics could
mitigate risks like confirmation bias and enhance the overall robustness of XAI applications.
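As one example of what such a quantitative check could look like, the sketch below implements a simple deletion-style fidelity score: the features ranked most important by an explanation are masked, and the resulting drop in the predicted probability is recorded. The masking strategy (mean imputation) and the choice of k are our own illustrative assumptions, not a standardized metric drawn from the reviewed literature.

```python
# Minimal sketch of a deletion-style fidelity score for a local explanation
# (masking strategy and k are illustrative assumptions, not a standard metric).
import numpy as np

def deletion_fidelity(model, x, attributions, background_means, k=5):
    """Drop in predicted probability after masking the top-k attributed features."""
    baseline = model.predict_proba(x.reshape(1, -1))[0, 1]
    top_k = np.argsort(-np.abs(attributions))[:k]
    x_masked = x.copy()
    x_masked[top_k] = background_means[top_k]  # replace important features with dataset means
    masked = model.predict_proba(x_masked.reshape(1, -1))[0, 1]
    return baseline - masked  # a larger drop suggests the explanation found truly influential features

# Example usage (reusing the model and SHAP values from the earlier sketch):
# score = deletion_fidelity(model, X_test[0], shap_values[0], X_train.mean(axis=0))
```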

Limitations and Future Work


This systematic literature review has several limitations that should be acknowledged.
Firstly, the review relied exclusively on the WoS database to identify and retrieve relevant
studies. While WoS is recognized as one of the most prestigious and widely utilized
research databases globally, known for its rigorous indexing standards and the high quality
of its data sources [38], the reliance on a single database may introduce a potential bias
by omitting relevant literature indexed in other databases such as Scopus, IEEE Xplore,
or Google Scholar. However, it is important to note that the comprehensive nature of WoS
mitigates this limitation to some extent. WoS encompasses a vast array of high-impact
journals across various disciplines, ensuring that the most significant and influential works
in the field of XAI are likely to be included. Moreover, the substantial volume of results
yielded from WoS alone necessitated a practical constraint on the scope of the review.
Including additional databases would have exponentially increased the literature volume,
rendering the review process unmanageable within the given resources and timeframe.
Secondly, the exclusion criteria applied in this review present additional limitations.
Only studies published in English were included, which could potentially skew the find-
ings by overlooking valuable contributions from non-English-speaking researchers and
regions. Furthermore, the review was limited to studies published after 2021 to ensure
the “recentness” of the applications of XAI. While this criterion was essential to focus
on the latest advancements and trends, it may have excluded foundational studies that,
although older, remain highly relevant to the current state of the field. Additionally, the re-
view was restricted to journal articles, excluding conference papers that often publish
seminal work, particularly in the fast-evolving domain of XAI. Given the considerable
volume of literature, including conference papers would have extended the scope beyond
what was feasible within the current study.
Moreover, the review process involved manually reading and categorizing each paper
to develop detailed codes, allowing for a nuanced analysis of the literature. While more
automated approaches to systematic reviews could have incorporated a broader range
of sources, such methods may lack the precision and depth achieved through manual
categorization. Future research could explore the use of automated methods to include
key conference papers and older foundational studies, providing a more comprehensive
understanding of the field’s development over time. However, for this review, our focus on
recent journal publications, combined with an in-depth manual analysis, was necessary to
provide a manageable and focused examination of the most current trends in XAI.
In summary, while these limitations—namely, the reliance on a single database, lan-
guage restrictions, the specific timeframe, and the focus on journal articles excluding con-
ference papers—are noteworthy, they were necessary to manage the scope and ensure a
focused and feasible review process. Future research could address these limitations by
incorporating multiple databases, including non-English studies, expanding the temporal
range to include older foundational work, and considering a broader set of sources, such as
conference papers. This approach would provide a more comprehensive overview of the
literature on XAI and its development over time. Finally, it is important to highlight that the
field of XAI is rapidly evolving. During the course of conducting and writing this review,
numerous additional relevant articles emerged that could not be incorporated due to time
constraints. This underscores the dynamic and ongoing nature of research in this area.

Author Contributions: Conceptualization, M.S.; methodology, M.S.; validation, M.S. and V.P.; formal
analysis, M.S. and V.P.; investigation, M.S. and V.P.; resources, M.S.; data curation, M.S. and V.P.;
writing—original draft preparation, M.S. and V.P.; writing—review and editing, M.S. and V.P. All
authors have read and agreed to the published version of the manuscript.
Funding: The work by M.S. was supported by the K.H. Renlund Foundation and the Academy of
Finland (project no. 356314). The work by V.P. was supported by the Slovenian Research Agency
(Research Core Funding No. P2-0057).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The review was not registered; however, the dataset created during the
full-text review, including predefined codes and protocol details, is available from the first author
upon request.
Acknowledgments: The authors (M.S. and V.P.) would like to thank Lilia Georgieva for serving with
them as a guest editor of the special issue on “Recent Application of XAI” that initiated this review.
Conflicts of Interest: The authors declare no conflicts of interest.

Abbreviations
The following abbreviations are used in this manuscript:

AI Artificial Intelligence
ALE Accumulated Local Effects
CAM Class Activation Mapping
COVID-19 Coronavirus Disease
CYP Cytochrome P450
DT Decision Tree
ICU Intensive Care Unit
IEEE Institute of Electrical and Electronics Engineers
IG Integrated Gradients
IML Interpretable Machine Learning
IoT Internet of Things
k-NN k-Nearest Neighbor
LIME Local Interpretable Model-agnostic Explanations
LR Logistic Regression
LRP Layer-wise Relevance Propagation
ML Machine Learning
NN Neural Network
PDP Partial Dependency Plots
PRISMA Preferred Reporting Items for Systematic Reviews and Meta-Analyses
RISE Randomized Input Sampling for Explanation
SHAP SHapley Additive exPlanations
SVM Support Vector Machine
WoS Web of Science
XAI Explainable Artificial Intelligence
Appendix A. Included Articles

Table A1. Included articles in our corpus of recent XAI application articles, their applications, and
the reasons why the authors argue that explainability is important in their application.

Authors & Year | XAI Application | Why Explainability Is Important


Li et al. (2023) [94] | Face mask detection | To verify the model's predictions.
Zhang et al. (2022) [65] | Diagnosis and surgery | The key to AI deployment in the clinical environment is not the model's accuracy but the explainability of the AI model. Medical AI applications should be explained before being accepted and integrated into the medical practice.
Hilal et al. (2022) [121] | Detect and explain Autism Spectrum Disorder | To explain the logic behind decisions, describe the strengths and weaknesses of decision-making, and offer insights about the upcoming behaviors.
Manoharan et al. (2023) [185] | Decision-making in banking and finance sector applications | To assure transparency in banking and finance and to assure that, in case of deception occurrence, the individual will be identified clearly in the sector.
Rjoub et al. (2023) [175] | Cybersecurity | To better understand the behavior of cyber threats and to design more effective defenses.
Astolfi et al. (2023) [244] | Wind turbine maintenance | Explainability increases transparency and trustworthiness.
Berger (2023) [186] | Asset pricing | Economic data is noisy, and there are many correlations. Explainability increases understanding of economically relevant variables and correlations.
Alqaralleh et al. (2022) [170] | Intrusion detection | Explainability increases transparency for the user and gives more insight into the decisions/recommendations made by the intrusion detection system.
Neghawi et al. (2023) [272] | Evaluating performance of SSML | Machine learning models are becoming more and more difficult, and explainability is needed to be able to evaluate questionable outcomes of the models.
Meskauskas et al. (2022) [302] | Risk assessment | Traceability of the decision the model makes increases the credibility of the model and can be achieved by implementing explainability techniques on the model.
Fouladgar et al. (2022) [242] | Sensitivity of XAI models on time series data | ML models that process time series data are often quite complex (due to the nature of time series data), and so explainability would increase the usability of time series data in ML.
Jean-Quartier et al. (2023) [245] | Tracking emissions of ML algorithms | ML models consume a lot of energy, and XAI implementations can reduce the amount of energy needed to get the wanted outcomes of utilizing an ML model.
Almohimeed et al. (2023) [280] | Cancer prediction | Explainability is added to increase efficiency and reliability.
Leem et al. (2023) [303] | Box office analysis | They say explainability here is essential for stakeholders in the film industry to gain insights into the model's decision-making process, assess its reliability, and make informed decisions about film production, marketing, and distribution strategies.
Ayoub et al. (2023) [304] | Lightpath quality of transmission estimation | Lack of explainability is hindering the deployment of ML systems because the results cannot be interpreted by domain experts. With a better understanding of the model's decision-making process, domain experts can evaluate decisions and make better choices when designing a network.
Bhambra et al. (2022) [249] | Image processing in astronomy | Explainability would give information on what parts of a picture of a galaxy are important for classification for the CNN used for the task.
Arrotta et al. (2022) [243] | Sensor-based activity recognition | They mentioned that while heat maps generated by the model may be informative for data scientists, they are poorly understandable by non-expert users. Therefore, the inclusion of a module to transform heat maps into sentences in natural language was deemed necessary to enhance interpretability for a wider audience. Additionally, providing explanations in natural language targeted towards non-expert users was highlighted as a key aspect of their work to ensure that the rationale behind the classification decisions made by the model could be easily understood and trusted by individuals without a deep technical background.
Jena et al. (2023) [130] | Earthquake spatial probability assessment (predicting where the earthquake hits) | Getting an explanation of why the ML model predicts an earthquake enables interpretation and evaluation based on expertise and knowledge of the area, and therefore one can judge if the model performs and/or if there is actually a risk of an earthquake.
Alshehri et al. (2023) [131] | Groundwater quality monitoring | Explainability techniques provide valuable insights on ML-made decisions, which is valuable for decision-making in water quality management. When important features are known, decisions can be made to focus on getting them better first.
Kim et al. (2022) [197] | Safety of self-driving cars | Mistakes made by self-driving cars can lead to dangerous accidents. Explainability gives insight into why models make mistakes and, therefore, leads to better development and safer cars.
Raval et al. (2023) [187] | Predicting credit card fraud | Explanations on model predictions help users understand significant features that the LSTM model predicts credit card fraud with.
Lim et al. (2022) [179] | Detecting deepfake voice | The authors highlighted the explainability of deepfake voice detection to ensure the system's reliability and trustworthiness by allowing users to understand and trust its decisions. They aimed to deliver interpretations at a human perception level, making the results comprehensible for non-experts. This approach differentiates human and deepfake voices, improving the system's effectiveness.
Vieira et al. (2023) [122] | Detection of epileptic seizures | Applying explainability methods in ML models used in the healthcare field is important because otherwise practitioners cannot understand the reasons behind decisions made by ML models.
Jena et al. (2023) [129] | Predicting an earthquake | Earthquakes can lead to significant financial losses and casualties, and that is why ML models for earthquake prediction are developed. Because ML models get more complex, explainability is needed to interpret the results and to design better models.
Youness et al. (2023) [153] | Prognostic lifetime estimation of turbofan engines | In system prognostics and health management, explainability is needed to increase the reliability of decisions made by remaining useful lifetime prediction models and also to gain knowledge on what parts caused the engine to fail. Increasing the reliability of remaining useful lifetime models is important because too early maintenance is a useless cost, and too late maintenance results in unexpected downtime, which is also a useless cost.
Ornek et al. (2021) [67] | Detecting health status of neonates | In healthcare, doctors need explanations of the ML model's decisions so they can make the right decisions in patient care.
Sarp et al. (2021) [69] | Chronic wound classification | The authors highlight that explainability is essential because it builds trust and transparency in healthcare and supports doctors' decision-making through visual cues like heatmaps. With the help of this method, AI decisions are more understandable for non-experts and can provide unexpected insights, improving wound management and treatment outcomes.
Hanchate et al. (2023) [158] | Average surface roughness prediction in smart grinding process | Grinding is a part of the process of manufacturing devices and machines and their parts in many (critical) fields. Post-process quality control can be long and costly, and so quality control is shifting towards in-line processes. In-line quality control is often achieved with ML methods, and explainability gives important insight into key features.
Aguilar et al. (2023) [305] | Interpretable ML model for (general) anomaly detection | In detecting anomalies in sensitive fields (like healthcare and cyber security), it is important that decisions made by ML models are interpretable, because actions based on those decisions can cause serious harm if they are wrong.
del Castillo Torres et al. (2023) [306] | Facial recognition | Modern ML models are quite good at facial recognition but give no insight into their decision-making process. In facial recognition, explainability is needed to gain confidence in ML methods and their solutions.
Wang et al. (2023) [70] | Age-related macular degeneration detection | The authors mentioned that explainability is essential for improving the robustness, performance, and clinical adoption of AI-based models in medical applications, especially when it comes to tasks like AMD detection.
Dewi et al. (2023) [307] | Image captioning | The authors highlight that explainability supports technical validation and model improvement, as well as ensuring the AI system can be trusted and effectively utilized, particularly in assistive technologies for visually impaired individuals.
Ghnemat et al. (2023) [52] | Detecting COVID-19 with medical imaging | Medical devices use actual rather than synthetic data. Legal regulations can hinder the use of ML models in medical imaging because the models are not interpretable. Interpretability increases available use cases for ML in medical imaging because of this issue. Explainability also increases fairness in diagnostic work because practitioners are able to evaluate the model's decisions. Explainability with a good ML model can also give new information about illnesses (COVID in this case).
Martinez et al. (2023) [188] | Predicting decision to approve or reject a loan | Financial institutes use an increasing amount of AI in bank loan decision-making processes, and these decisions can affect the loan applicants significantly. Explainability is needed to evaluate AI-based decisions and to improve the models.
Younisse et al. (2022) [171] | Intrusion detection | Explainability adds reliance and trust towards ML systems. Explainability can also shift the focus on decision-making from humans to AI. Trust and reliability are as important in intrusion detection as efficiency.
Chelgani et al. (2023) [155] | Modeling hydrocyclone performance | Modeling and AI are crucial to determining hydrocyclone operational variables and their impact on particle sizes. Explainability techniques applied to AI methods can help to gain insight on the sensitivities of industrial modeling (hydrocyclone processes in this case).
Rietberg et al. (2023) [66] | Identifying reasons for taking MRI scan from MS (multiple sclerosis) patient | There is constant demand in the healthcare field to make processes less costly while ensuring the quality of patient care doesn't drop, and AI can help reduce costs by increasing efficiency. Explainability can make AI more trustworthy, and it is also crucial that medical professionals know the reasons behind AI-made decisions (patient health is on the line).
Martins et al. (2024) [189] | Credit-related problems, fraud detection, risk assessment, investment decisions, algorithmic trading, and other financial decision-making processes | The authors did not directly state why explainability is important for their specific application. But as a summary, they highlighted that explainability is critical for ensuring transparency, trust, and informed decision-making in the financial domain.
Diaz et al. (2022) [206] | Churn prediction | Explainability with predictive AI methods can give more insight on important factors that lead to churn. When it is known why valuable customers churn, decisions can be made to avoid that.
Lohaj et al. (2023) [53] | Predicting the need of ICU on COVID-19 patients | COVID-19 is a quickly evolving disease, and all the features influencing the course of the illness are not understood. With the help of AI and XAI methods, more understanding of the COVID-19 illness can be gained.
Geetha et al. (2022) [163] | Identification of concrete cracks | The authors mentioned that explainability can address the "black box" nature of the deep learning models that they are using in their domain. They are aiming to generate high-quality, interpretable explanations of the decisions for concrete crack detection and classification.
Clare et al. (2022) [132] | Predicting ocean circulation regimes | Explainability would enable better decision-making based on AI-based knowledge on climate change because good decisions cannot be made based on uncertain knowledge, and wrong decisions can have wide-ranging impacts. For example, decisions made on the coasts where sea level rise can cause great damage need to be based on interpretable knowledge to ensure safety.
Zhang et al. (2023) [89] | Medical text processing | The authors highlighted that for practical usage in the healthcare context, AI models must be able to explain things to people. And also, understanding how those models work is essential for adopting and using medical AI applications. And also, they mentioned explainability is necessary to ensure the acceptability of AI in medicine and its use in clinical applications.
Ramon et al. (2021) [212] | Classifying psychological traits from digital footprints | Professionals using the AI-made decisions need explainability to trust the models. The EU and GDPR also require certain levels of interpretability from ML models (especially in applications in critical areas). Interpretability would increase understanding of the issue that ML is used for and also reveal relations that wouldn't have been found otherwise.
Alkhalaf et al. (2023) [308] | Cancer diagnosis | They mentioned that this helps doctors and patients understand the reasons behind the automated diagnosis made by the ML models. And also, they say, experts can provide better medical interpretations of the diagnosis and give suitable treatment options using this explainability. They also mentioned this can build trust between patients, medical staff, and AI in the medical field.
Noh et al. (2023) [164] | Intelligent system fault diagnosis of the robotic strain wave gear reducer | It is hard to convince stakeholders and engineers of ML models' usability if they are not interpretable.
Chen et al. (2023) [281] | Interpretation of ML results from image data | Use cases of AI are increasing, also in fields that use a lot of image-based AI (medical fields, for example). LIME is usually used for text data or numerical data. Methods for applying LIME to image data would increase the use cases of explainability methods for image processing AI.
Nunez et al. (2023) [133] | Water resources management (snowmelt-driven streamflow prediction) | The authors mentioned that explainability contributes to a greater understanding of hydrological processes and ensures the trust and transparency of the models and decision-making processes used in this context.
Chowdhury et al. (2023) [154] | Fault prediction on 3D printer | Models can sometimes make decisions based on wrong or irrelevant information. Interpretability increases trust because the user can evaluate if the ML-made decision makes sense.
Shah et al. (2023) [223] | Detecting AI-generated text | AI is nowadays very good at producing text that seems human-made, and detection techniques are needed to ensure safety and prevent identity theft. Interpretation techniques give more insight and help evaluate detection models' decisions, which increases the benefits of detection model usage.
Kolevatova et al. (2021) [134] | Analysis of the impact of land cover changes on climate | The authors mentioned that explainability helps to understand the complex relationships between land cover changes and temperature changes.
Mehta et al. (2022) [296] | Social media analysis | The authors mentioned that explainability can be used for users to understand the results and to trust the decisions of the algorithms. And also, they highlighted that explainability is essential for gaining trust from AI regulators and business partners, enabling commercially beneficial and ethically viable decision-making.
Ferretti et al. (2022) [174] | Detecting vulnerabilities in source code | Neural network models are becoming more accurate but also more complex and harder to understand. In the domain of cybersecurity, it is important to know the reasons behind AI-made decisions when it comes to source code vulnerability detection, because wrong decisions can lead to disaster.
Cha et al. (2024) [217] | Sentence embedding | In the field of natural language processing, sentence embedding models do not tend to perform very well. It is not sure how to make sentence embedding models perform better. In this study, explanations are added in the middle of the model to enhance model performance.
Veitch et al. (2021) [198] | Marine autonomous surface vehicles engineering | They mentioned that explainability is essential to enhance usability, trust, and safety in the context of decision-making, AI functionality, sensory perception, and behavior. The aim was to build trust among ASV users by providing transparent and understandable representations.
Kulasooriya et al. (2023) [162] | Predicting strength characteristics of concrete | Interpretability of ML models in the structural engineering domain is important so (1) engineers can identify reasons behind model-based decisions, (2) users and domain experts can trust more in ML-made decisions, and (3) proposed methods can be explained clearly for the non-technical community (especially without knowledge of AI).
Elkhawaga et al. (2022) [156] | Predictive process monitoring | In predictive process monitoring, stakeholders need ML-based decisions to be interpretable so they can evaluate them properly and make good business decisions with the help of AI.
Nascita et al. (2021) [309] | Mobile traffic classification | The authors mentioned that it is important to have an explanation due to the lack of interpretability of the classification models used in this context. Further, they highlighted that lack of explainability can cause untrustworthy behaviors, lack of transparency, and legal and ethical issues, especially in cybersecurity applications.
Larriva-Novo et al. (2023) [172] | Intrusion detection | It is a security threat when a technician operates an ML-based intrusion detection system without interpretability in the model or knowledge of AI. This can also lead to a lack of trust in AI and ML tools.
Andreu-Perez et al. (2021) [120] | Cognitive neuroscience development | The authors mentioned that infant fNIRS data are still quite limited, and by using XAI learning and inference mechanisms, they can overcome that limitation. And also, they mentioned that XAI provides explanations for classification in their context.
El-khawaga et al. (2022) [157] | Predictive process monitoring | The goal of predictive process monitoring is to inform stakeholders about how business processes are operating now and in the near future. When business processes are described by black-box models (which is often the case), stakeholders don't get good explanations on ML-made decisions, which reduces trust. Interpretability is needed to increase trust and to help stakeholders make data-driven decisions.
Silva-Aravena et al. (2023) [310] | Cancer prediction | Interpretability can increase ML model usage in the field of cancer prediction, especially in more complex use cases. Explainability increases the amount of knowledge gained from ML-made decisions, which leads to better decision-making in patient care. Explainability of ML models benefits both practitioners and management.
Bjorklund et al. (2023) [311] | Explainable AI methods | Highly performing black-box AI models can be insufficient if they make predictions based on the wrong features. Interpretability gives information on a model's decision-making process and so can lead to the development of better AI models. Interpretability is crucial in safety-critical fields (like medical) and when finding new information (like physics research).
Dobrovolskis et al. (2023) [312] | Agent development | The authors mentioned that the use of explainability can improve user experience and trust by providing clear and understandable explanations of the system's behavior. And also, they mentioned it can lead to great acceptance and adoption by users of the systems. They highlighted that explainability in the smart home domain is essential due to the sensitive and high-risk nature of some AI applications that are closely related to human lives, wellness, and safety.
Kamal et al. (2022) [76] | Glaucoma prediction | The authors mentioned that explainability increases the user's confidence in the decision-making process with existing ML models that are limited to glaucoma prediction. And also, they mentioned that explainability provides convincing and coherent decisions for clinicians/medical experts and patients.
Kumar et al. (2021) [114] | Brain tumor diagnosis | Explainability increases trust towards AI systems and safety for use in the medical field. Explainability should also be measurable so the explanations can be trusted.
Pandey et al. (2023) [149] | Predicting product gas composition and total amount of gas yield | Gas production systems are complex, and black-box methods are used for product gas prediction for that reason. Production systems can fail without anyone knowing why. Explainability would increase the use of AI and increase safety and efficiency due to increased knowledge of the system.
Amoroso et al. (2023) [105] | Predicting Alzheimer's disease | It is difficult for clinical practitioners to adopt highly developed AI systems due to their lack of interpretability. There are lots of great tools for brain disease prediction that could help diagnose illness in its early stages. Explainability would allow people with little to no knowledge of AI to use these diagnostic tools.
Tao et al. (2023) [228] | Cheating detection and player churn prediction in online games | Lack of interpretability in black-box models hinders the development of the models and use of AI in online gaming. Explainability is needed to make sure models are learning the right relations, allow practitioners to adjust the model in problematic cases, make sure that models perform the same way in an online setting, and enable easy debugging.
Stassin et al. (2024) [246] | Vision transformers | The authors mentioned that explainability in the context of Vision Transformers is essential for ensuring transparency, mitigating biases, enhancing safety, and promoting trust in AI systems.
Bobek et al. (2022) [167] | Hot-rolling process (steel industry) | In manufacturing industries, data is collected straight from the machines and processed with AI to provide information about the machines to help making decisions. Decision makers usually aren't experts of AI, so they cannot rely fully on AI-made decisions. Explainability would make relevant decision-making easier and adopting AI in decision-making processes more worthwhile.
Mollaei et al. (2022) [85] | Functional work ability prediction | As ML models become more widely used, explainability is needed so the right decisions can be made based on AI-made decisions. There are several methods to analyze an ML model's performance, but those don't give insight on the decision-making process.
Lin et al. (2021) [178] | Face verification | Explainability is needed in complex ML systems that do face verification so that ML-made decisions can be trusted. False-positive results on face recognition used in security applications are a big threat to security and privacy. Interpretability increases users' trust and helps develop better and more accurate models.
Petrauskas et al. (2021) [86] | Decision support system for the nutrition-related geriatric syndromes | They mentioned that explainability is needed for medical professionals to understand the reasoning behind the decisions made by the clinical decision support system (CDSS). And also, they mentioned this approach enables physicians to comprehend the system's assessment errors and identify areas for improvement. And also, they mentioned CDSS's explainability allows less experienced physicians to pay attention to nutrition-related geriatric syndromes and perform detailed examinations of nutrition-related disorders.
Sharma et al. (2023) [161] | Thermal management in manufacturing process | The authors mentioned that the explainability of their context is important to develop a highly precise model. Further, they recognized the significance of transparency and interpretability in their model, particularly in the context of predicting the thermophysical properties of nanofluids.
Torky et al. (2023) [194] | Prediction and recognition of financial crisis roots | The authors mentioned that explainability enables the interpretation of complex data patterns, allowing humans to understand and interpret the logic behind classifying patterns efficiently. This is essential for financial crisis prediction as it helps in providing evidence for financial decisions to regulators and customers, especially where the results of the AI model may be inaccurate. And also, they mentioned that this will help with financial institutions' work.
Perl et al. (2024) [313] | Fault location in power systems | They mentioned that explainability in this domain is essential because the lack of transparency in ML models for fault location in power systems poses a significant challenge. And also, they mentioned the black box ML models make it difficult for power system experts to understand the connections between input bus measurements and the output fault classification. This can cause less trust in the model's recommendations and makes it challenging to improve PMU placement for better fault classification.
Luo et al. (2021) [182] | Aircraft detection from synthetic aperture radar (SAR) imagery | The authors mentioned that explainability helps to address the trustworthiness of SAR image analytics. And also, they mentioned that explainability helps to provide a better understanding of the DNN feature extraction effectiveness, select the optimal backbone DNN for aircraft detection, and map the detection performance.
Andresini et al. (2023) [139] | Recognition of bark beetle infested forest areas | Because bark beetles can affect forest health quickly and in large areas, AI methods are needed to aid the recognition of bark beetle infestations. Explainability in these AI methods is important to gain trust in forest managers and other non-AI-experts and remote stakeholders.
van Stein et al. (2022) [140] | Plant breeding | The authors mentioned that the explainability of this context is important because it helps to gain a deep understanding of the role of each feature (SNP) in the model's predictions. And also, they mentioned that providing transparency and interpretability through sensitivity analysis can enhance the reliability and applicability of genomic prediction models in real-world scenarios.
Moscato et al. (2021) [190] | Credit risk assessment | In peer-to-peer lending, lenders use P2P platforms to aid their decision-making. These platforms use complex models that are hard to interpret (especially without knowledge of AI). Explainability of AI-made decisions on P2P platforms is important to help lenders make accurate loan decisions.
Nwafor et al. (2023) [314] | Non-technical losses in electricity supply chain in sub-Saharan Africa | The goal of this study is to find if and to what extent staff-related issues impact non-technical loss of electricity in sub-Saharan Africa. To answer this research question, feature importance is necessary.
Panagoulias et al. (2023) [315] | Intelligent decision support for energy management | The authors mentioned that explainability is essential for their context because it builds user trust and ensures faster adoption rates, especially in the energy sector, where AI can provide a more sustainable future. And also, they mentioned that it is essential for providing justification for recommended actions and ensuring transparency and interpretability of the analytics results.
Rodriguez Oconitrillo et al. (2021) [214] | Detecting reasons behind judge's decision-making process | Studying judges' decision-making process is very sensitive because of their freedom and juridical independence. Studying judges' behavior and decision-making is still important to help other judges when they are reviewing previous cases to help their decision-making. XAI is an important tool here because XAI techniques give insight into the reasons behind decisions.
Kim et al. (2023) [316] | Designing an XAI interface for BCI experts | The authors mentioned that explainability is important for BCI researchers to understand the decisions made by AI models in classifying neural signals or analyzing signals based on their domain expertise.
Qaffas et al. (2023) [202] | Inventory management | The authors mentioned that providing explanations for the assignment of items to classes A, B, and C allows for a better analysis of the items, easy detection of misclassifications, improved understanding of inventory classes, and flexibility in inventory management decisions. And also, they mentioned explainability helps make decisions more transparent and enhances interpretability.
Wang et al. (2023) [317] | Improving performance of XAI techniques for image classification | Use of black-box models in critical fields is increasing, and explainability is much needed to help users evaluate models' decisions and increase trust from users. The efficiency and accuracy of modern XAI visual explanation methods (CAM, LIME) can be improved, which is the goal of this study.
Mahbooba et al. (2021) [173] | Trust management in intrusion detection systems | The authors mentioned that human experts need to understand the underlying data evidence and causal reasoning behind the decisions made by AI in their domain. Further, the network administrators can enforce security policies more effectively for identified attacks, leading to improved trust in the systems by providing explanations.
Puechmorel (2023) [318] | Creating better XAI techniques with manifolds and geometry | Data manifolds are not much researched, even when computing on manifolds can result in higher performance and the ability to compute on very high-dimensional data.
Rozanec et al. (2021) [166] | Prediction of pentane content during liquefied petroleum gas debutanization process | Explainability is important so users can understand the model's limitations in operational use.
Heuillet et al. (2022) [288] | Explaining reinforcement learning systems | As ML models get progressively more complex, transparency is needed so the use of a black-box model can be justified. Explainability also increases trust toward AI systems from the user end. Reinforcement learning is an ML technique that is increasingly used in critical fields where the interpretability of the ML model is necessary for the end users.
Gramespacher et al. (2021) [191] | Predicting loan defaults | ML models used for credit score/loan risk prediction in finance are becoming increasingly complex, which is contrary to increasing demand for transparency from authorities. As loan default is more costly to businesses than the unaccepted possible clients, the most beneficial model might not be the same as the most optimal model; interpretability is needed to aid in developing these models.
Mohamed et al. (2022) [275] | Small-object detection | Explainability increases reliability and therefore accelerates the ML model's approval for real-life applications.
Xue et al. (2022) [135] | Predicting spatiotemporal distributions of the lake surface temperature in the Great Lakes | There is a growing need to understand the relationships between features and predictions in black-box models. The Great Lakes area is so big (84% of North America's surface water) that it impacts the environment very differently than "regular" lakes. Explainable AI methods are needed to understand the climate of the Great Lakes area better.
Muna et al. (2023) [181] | Attack detection on IoT infrastructures of smart cities | Complex ML models are widely used in the areas of IoT and smart cities. Lack of interpretability is a security issue because ML models are hard to develop to be more safe if it is not known how they make decisions.
Yigit et al. (2022) [63] | Detect paratuberculosis from histopathological images | To help pathologists in the diagnosis of paratuberculosis.
Machlev et al. (2022) [319] | Power quality disturbances classification | XAI is important because power experts may find it hard to trust the results of such algorithms if they do not fully understand the reasons for a certain algorithm's output.
Monteiro et al. (2023) [320] | Machine learning model surrogates | The authors mentioned that the explainability of their context is important because there is a need for AI models that balance the tradeoff between interpretability and accuracy and explain the feature relevance in complex algorithms.
Chen et al. (2022) [95] | Studying sustainability of smart technology applications in healthcare | Existing methods based on AI are not easy to understand or communicate, so explainability is needed to enhance the usability of AI systems within users. Smart technology can also be deemed unsustainable because it is not easy to implement and repair; explainability makes implementation easier due to higher trust and understanding, and repairing is easier because problems can be located more easily.
Shi et al. (2022) [321] | Finding bugs in software | Explainability can help fuzzy models in bug detection by giving explanations of which parts of code need to be searched. Explainability also gives information about false positives/negatives, which helps develop a better model.
Chen et al. (2024) [168] | Job cycle time prediction | Fuzzification of a complex DNN is a difficult task. Explainability simplifies the decisions made by the DNN model, which can help in the fuzzification. It is common in the domain of manufacturing and management that complex black-box models are used without understanding of AI, and explainability would lead to more meaningful model usage and more efficient processes.
Li et al. (2023) [124] | Predicting risk of brain metastases on patients with lung cancer | More complex and accurate models are needed to predict brain metastasis because there are lots of patients at risk of developing brain tumors. More complex models are usually not interpretable, so explanations are needed so interpretability is not compromised.
Igarashi et al. (2024) [322] | The effects of secondary cavitation bubbles on the velocity of a laser-induced microjet | The authors mentioned that explainability is important in their domain because it supports understanding the physical phenomena related to the influence of secondary cavitation bubbles on jet velocity.
Yilmazer et al. (2021) [203] | On-shelf availability monitoring | The authors mentioned that explainability is useful in their context because it provides users with an explanation about individual decisions, enabling them to manage, understand, and trust the on-shelf availability model. And also, they mentioned explainability allows non-experts and engineers in grocery stores to understand, trust, and manage AI applications to increase OSA. Further, they mentioned it provides transparency and understandability.
Zhang et al. (2023) [180] | Attacking ML classifier of EEG signal-based human emotion assessment system with data poisoning attack | EEG data is unstable and complex, which makes models that use this data very difficult to interpret and understand. Explainability is needed to make sure models are doing what they are supposed to do and also help develop better models. In attack detection systems, explainability is needed to identify, analyze, and explain DP attacks.
Kim et al. (2023) [146] | Urban growth modeling and prediction | The authors mentioned that explainability in their domain strengthens the interpretive aspect of ML algorithms. And also, they mentioned XAI models are likely to increase use in urban and environmental planning fields because they effectively supplement the black box features of AI.
Ilman et al. (2022) [261] | Predicting transient and residual vibration levels | Mathematical and statistical models may not be able to find all relations between parameters and to predict accurately in complex settings. When predicting response to vibration, explainability is needed to find unknown relationships between parameters and therefore build more efficient and stable systems.
Deperlioglu et al. (2022) [77] | Glaucoma diagnosis | The authors mentioned that explainability in their specific application is important because it provides transparency and improves trust and confidence in the automated deep learning solution among medical professionals.
Bermudez et al. (2023) [195] | Risk management in insurance savings products | The authors mentioned that the explainability of their specific application is important for understanding, trust, and management of ML methods that are not directly interpretable. And also, they mentioned XAI techniques are useful for risk managers to identify patterns, gain insights, and understand the limitations and potential biases of the models, finally leading to more informed and accurate decisions for their organizations and stakeholders.
Sarp et al. (2023) [54] | COVID-19 diagnosis using chest X-ray images | The authors mentioned that the explainability of their application is important for providing insights and understanding the inner workings of the black box AI model, especially in COVID-19 diagnosis. And also, they mentioned that explainability helps non-expert end-users understand the AI model by providing explainability and transparency, which is essential for feedback and providing more information to assist doctors in decision-making.
Soto et al. (2023) [323] | Improving counterfactual explanation models | Lack of explainability can lead to using less developed AI systems, which is not optimal in any field and can lead to losses and other serious consequences. Counterfactuals are becoming an increasingly important area of research for XAI because they provide very human-like explanations (very interpretable, hard to misunderstand).
Ganguly et al. (2023) [80] | Diabetes prediction | The authors mentioned that to make ML models crystal clear and authentically explainable, they have to use explainability. And also, they mentioned that lack of explanation and transparency in AI systems in healthcare can lead to less trust from patients and healthcare providers.
Messner (2023) [259] | Improving regression model explanation techniques | Using AI systems in research in social and behavioral sciences is increasing because of the high performance of black-box models. Researchers in these areas are not usually experts of AI, and therefore using complex models without explainability can lead to mistrust towards models, misuse, or wrong conclusions. Explainability is needed to make sure quality, data-driven research can be made in all fields. Also, more research is needed to improve the interpretability of regression models because XAI research has been focused on explaining classification models.
Rudin et al. (2023) [193] | Predicting credit risk | Complex AI models without explainability are used to accept or turn down loan applications, and lack of explainability leads to lack of fairness and transparency. When the explanation technique approximates an ML model, explanations may not always be accurate or globally consistent, which again affects the fairness.
Han et al. (2022) [324] | Competitor analysis | The authors mentioned that understanding the competitive factors and points of differentiation from the customer's perspective is essential for product developers. And also, they mentioned that their method effectively reflects customers' opinions, which is essential for understanding customer preferences and improving product competitiveness. Therefore, they mentioned the explainability of their specific application.
Jo et al. (2023) [177] | Malware detection | The fact that deep-learning-based malware detection models don't (usually) provide explanations for their classification decisions is a cybersecurity threat. Explainability of malware detection models would increase user trust and make integrating these models into cybersecurity systems easier and more accessible (because of regulations). Both malware detection and explanation models need to constantly improve to keep up with the improvement of malware.
Quach et al. (2023) [141] | Disease detection in agriculture | The authors mentioned that explainability in their context is important because it is essential for making in-depth assessments and ensuring reliability in practice.
Hasan et al. (2024) [325] | Productivity prediction | Productivity in production is a complex situation and is hard to model with linear ML models. More complex models would perform better, but at the cost of interpretability.
Perez-Landa et al. (2021) [183] | Social media monitoring for xenophobic content detection | The authors mentioned that explainability is essential here because it gives the ability to understand why a tweet has been classified as xenophobic. They mentioned that tweets can affect people's behavior, and the development of an XAI model is essential to providing a set of explainable patterns describing xenophobic posts. This method can enable experts in the application area to analyze and predict xenophobic trends effectively.
Lorente et al. (2021) [201] | Development of advanced driver-assistance systems (ADAS) | The authors mentioned that explainability is essential in this context because the level of automation is constantly increasing according to the development of AI. And also, they mentioned ADAS assists drivers in driving functions, and it is essential to know the reasons for the decisions taken. And also, they said trusted AI is the cornerstone of the confidence needed in this research area.
Raza et al. (2022) [83] | Classifying different arrhythmias | When using AI-based systems for decision-making in healthcare, it is important for patient health that the model is interpretable and that practitioners can justify the model's decisions. Both clinical healthcare practitioners and patients need to trust the AI system when AI is used for diagnostic decision-making.
Gim et al. (2024) [165] | Optimization for injection molding | The authors mentioned that the IMC features are difficult to interpret and control independently without affecting other features, and therefore the quality differences cannot be regarded as the sole response due to the change of a specific feature. To address this issue, explainability is required to interpret the relationship between the features in the IMC and each part quality.
Varghese et al. (2023) [104] | Alzheimer's disease classification | The authors mentioned that explainability provides insights into the features or characteristics of the model used to make predictions in the context of AD classification. And also, they mentioned it is important for improving trust in the system and its results.
Sajjad et al. (2022) [326] | Heat transfer optimization for nanoporous coated surfaces | The authors mentioned that explainability uncovers the most influencing surface features for the nanoporous coatings.
Aquino et al. (2023) [98] | Human activity recognition | The authors mentioned that it is essential to understand how the model makes decisions and to ensure the model's predictions are not based on biased features.
Lee et al. (2023) [169] | Yield prediction | The authors mentioned that explainability in their specific application is important to increase the transparency of the model to improve usability. And also, they mentioned it is important to improve the decision-making process and to understand the factors influencing the semiconductor manufacturing field.
Althoff et al. (2021) [257] | Hydrological modeling and prediction | The authors mentioned that explainability is important in their context because it extends the interpretability of ML models and makes the results more understandable to humans. Also, they mentioned it is important to uncover how runoff routing is being resolved and to turn black box models into glass box models.
Posada-Moreno et al. (2024) [298] | Explaining in both global and local ways with the same method | Global explanations are used to explain the model as a whole, but those explanations don't provide exact information of where important features are. More research is needed to create explanation methods that give both global and local explanations, because both of those methods separately have severe limitations.
Ravi et al. (2023) [327] | Predicting hardness in alloy based on composition and condition | Lack of explainability leads to models not being used in unexplored use cases and use cases with low amounts of data. In material science/engineering, there are so many different use cases for AI that the models need to be explainable so they are trustworthy. Explainability can also help find new features about physical phenomena under experiments.
Tasci (2023) [116] | Brain tumor classification | The authors mentioned that explainability enables humans to interpret and understand the results of artificial intelligence, which is crucial in the medical field for ensuring the safety and reliability of the diagnostic solutions offered by deep learning techniques.
Sauter et al. (2022) [328] | Computational histopathology | Deep learning models can learn unwanted or wrong correlations that are not causally related to the classification task. Explainability helps detect possible biases and ensure the model performs correctly.
Laios et al. (2022) [51] | Surgical decision-making in advanced ovarian cancer | The authors mentioned that explainability supports explaining feature effects and interactions associated with specific threshold surgical effort. And also, they mentioned surgical decision-making at cytoreductive surgery for epithelial ovarian cancer (EOC) is a complex matter, and an accurate prediction of surgical effort is required to ensure the good health and care of patients. AI applications are encountered with several challenges derived from their "black box" nature, which limits their adoption by clinicians, and that is why explainability is important.
Kalyakulina et al. (2022) [111] | Disease classification (Parkinson's and Schizophrenia) | The authors mentioned that explainability gives the ability to interpret and verify the decisions made by ML models, which is essential for medical experts. And also, they mentioned that it helps to improve the system and understand the internal mechanics of the model, which can lead to enhancements and refinements.
Bhatia et al. (2023) [96] | Tracing food behaviors | Explainability enhances comprehension and trust, and in this use case, it can make ML-based software more comfortable to use.
Rozanec et al. (2021) [196] | Time series forecasting and anomaly detection | They mentioned that explainability in their context is important because the increasing adoption of AI demands understanding the logic beneath the forecasts to determine whether such forecasts can be trusted. And also, they mentioned understanding when and why global time series forecasting models work is essential for users to detect anomalous forecasts and comprehend the features that influence the forecast.
AI methods are becoming increasingly
important for soil drought prediction due
to climate change. Explainability is
Huang et al. (2023) [136] Soil moisture prediction needed so models can be interpreted and
their decisions evaluated by an end user
that has knowledge of physics (and other
things related).
Bandstra et al. (2023) [138], Detection and quantification of isotopes using gamma-ray spectroscopy: AI models to detect gamma rays are used in high-stakes security situations, where explainability is necessary and crucial to avoid model-induced damage. Explainability increases trust and helps understand and evaluate model performance.
Konradi et al. (2022) [97], Aspiration detection in flexible endoscopic evaluation of swallowing (FEES): The authors mentioned that the lack of transparency in automated processing conflicts with the European General Data Protection Regulation (GDPR), which prohibits decisions based solely on automated processing.
Mishra et al. (2022) [226], Solution development for analyzing and optimizing the performance of agents in a classic arcade game "Fuzzy Asteroids": The authors mentioned that the results provided by AI models would be more acceptable to end users if there were explanations in layman's terms associated with them.
Lysov et al. (2023) [142], Diagnosis of plant stress: The authors mentioned that the explainability of the AI models used in the early diagnostics of plant stress using the HSI process is essential for understanding the decision-making process and the features that contribute to the diagnostic outcomes.
The authors mentioned that
explainability has the potential to
advance a more comprehensive
understanding of breast cancer metastasis
Yagin et al. (2023) [44] Breast cancer prediction
and the identification of genomic
biomarkers, and it is opening new paths
for transformative advances in breast
cancer research and patient care.
Dworak et al. (2022) [199], Autonomous vehicles for object detection using LiDAR: The authors mentioned that in the domain of autonomous driving, where decisions based on ML models could impact human lives, it is essential to understand how neural networks process data and make decisions. They also mentioned that explainability is essential for ensuring the safety and reliability of autonomous driving systems.
When predicting heavy metals and
groundwater quality, future data might
be much different than historical data.
Explainability is needed to gain trust
from domain experts and ensure usability
when utilizing models that have been
trained with historical data.
Thi-Minh-Trang Huynh et al. (2022) [137] Predicting heavy metals in groundwater
Relationships between heavy metals and
other chemical contents are highly
non-linear, so white-box models give
poor results. Explainability would
increase both implementation and
improvement of ML models used in this
use case.
The authors mentioned that
interpretability and explainability of
predictions are essential in critical areas
like healthcare, medicine, and therapeutic
Bhandari et al. (2023) [110] Parkinson’s disease diagnosis
applications, and while ML models are
effective in predicting outcomes, trust
issues and transparency can be addressed
through explainability.
Akyol et al. (2023) [160], Modeling refrigeration system performance: Interpretable white-box models don't perform well when predicting refrigeration system performance due to non-linearity in the data, so black-box models are in wide use. Explainability would increase understanding of how the input values, which are system components, affect the goal values.
The authors mentioned that
explainability is essential due to the
Vijayvargiya et al. (2022) [99] Human lower limb activity recognition
difficulty in understanding how the
classifiers predicted the actions.
Renda et al. (2022) [200], Automated vehicle networking (in the context of 6G systems): The authors mentioned that understanding the decisions made by AI models is essential for ensuring the safety and reliability of automated driving systems. Explainability also permits improving the user experience of the offered communication services by helping end users trust (by design) that in-network AI functionality issues appropriate action recommendations. They also mentioned that explainability is needed at the design stage to perform model debugging and knowledge discovery.
The authors mentioned that the
explainability in their specific application
is important because it is providing
insights into why the model makes
certain decisions. And also, they
Akilandeswari et al. (2022) [329] Factory/plant location selection
mentioned that this transparency helps
stakeholders understand the reasoning
behind the model’s predictions, enabling
them to make informed decisions and
potentially improve the model further.
The authors mentioned that many
prevailing ML algorithms used in
medicine are often considered black box
models, and this lack of transparency
hinders medical experts from effectively
leveraging these models in high-stakes
decision-making scenarios. Therefore,
Zlahtic et al. (2023) [90] ML model development in medicine
explainability is needed. And also, they
mentioned that by empowering white
box algorithms like Data Canyons, they
hope to allow medical experts to
contribute their knowledge to the
decision-making process and obtain clear
and transparent output.
Explanations given by modern XAI
methods aren’t always intuitive for
non-expert users to comprehend,
especially when it comes to rule-based
Aghaeipoor et al. (2023) [330] Explaining DNN with fuzzy methods
explanations. Fuzzy linguistic
representations in rule explanations
would increase comprehension and
therefore usability of XAI methods.
Lee et al. (2021) [331], Visualizing globally high-level features using unstructured data: There are little to no good methods for creating global explanations for unstructured data; for structured data, these methods exist and are valid. Generating global explanations for model predictions on unstructured data in an easily interpretable way is important to gain knowledge on the model's inner processes.
The authors mentioned that
understanding the decisions made by the
Gouverneur et al. (2023) [91] Pain recognition
classifiers is essential for gaining insights
into the mechanisms of pain in detail.
Hung et al. (2021) [332], Improving image data quality at the preprocessing stage: Explainability methods can be used in image classification tasks to give insight on relationships between inputs and outputs. Explainability is also needed to analyze and demonstrate the importance of the proposed method for image quality improvement.
Kamal et al. (2021) [106], Detecting Alzheimer's disease from MRI images and gene expression data: Black-box models are hard to interpret. Explainability is necessary to gain knowledge about the model's decision-making process and to find possibly new features that predict and affect the appearance of Alzheimer's disease.
Qaffas et al. (2023) [285], Inventory management: The authors mentioned that the explainability of their specific domain is important to enhance the decision-making process by providing transparent justifications for item assignments and interpretations of obtained clusters.
Dindorf et al. (2021) [68], Pathology-independent classifier: The authors mentioned that explainability is important in their specific application to understand why subjects were classified, including instances of misclassification, and to reduce the black box nature of the machine learning model.
Javed et al. (2023) [72], Cognitive health assessment: The authors mentioned that explainability provides insights into the decision-making process of ML models, particularly in the context of healthcare and cognitive health assessment. This transparency and interpretability are essential for understanding how the models identify and classify activities, especially in scenarios involving individuals with dementia or cognitive impairments.
The authors mentioned that
explainability in credit risk assessment is
important to address the trade-off
between predictive power and
interpretability. They mentioned that new
algorithms offer high accuracy but lack
Gramegna et al. (2021) [192] Credit risk estimation
intelligibility with limited understanding
of their inner workings. Therefore, the
use of explainability provides
transparency and insights into why
certain outputs are generated by
these models.
The authors mentioned that
explainability, along with interpretability
Wani et al. (2024) [49] Lung cancer detection
and transparency, is an essential aspect of
AI in healthcare.
Nguyen et al. (2023) [148], Optimization of membraneless microfluidic fuel cells (MMFCs) for energy production: The authors mentioned that explainability is important in their specific application because the black box nature of AI optimization models reduces their credibility and hinders additional understanding of the importance and contributions of each feature in the decision-making process of these models.
The authors mentioned that the
explainability of their specific application
is important because it enhances trust,
Kuppa et al. (2021) [176] XAI methods in cybersecurity
gives understanding of model decisions,
and addresses security concerns in the
cybersecurity domain.
Iatrou et al. (2022) [143], Prediction of nitrogen requirement in rice: The authors mentioned that the explainability of their specific application is important to provide rice growers with sound nitrogen fertilizer recommendations in precision agriculture.
The authors mentioned that
explainability is important to provide
insights into linguistic structures and
Sevastjanova et al. (2021) [218] Question classification patterns. Also, they mentioned that
traditional ML models are black boxes,
making it challenging to extract
meaningful linguistic insights.
The authors mentioned that
explainability is essential for providing
insights into the underlying mechanisms
Real et al. (2023) [92] Drug response prediction
of drug actions, which is critical for
effective clinical decision-making and
patient care.
The authors mentioned that the
explainability of their specific application
needs understanding and explaining the
internal logic of their model. Also, they
Aghaeipoor et al. (2022) [262] Big data preprocessing mentioned that explainability helps
human users to trust sincerely, manage
effectively, avoid biases, evaluate
decisions, and provide more robust
machine learning models.
Galli et al. (2022) [147], Building energy performance benchmarking: The authors mentioned that understanding why a certain prediction is provided by a black-box model is essential in modern contexts where the decisions of an AI system are required to be transparent and fair, such as for certification purposes. They also mentioned that the proposed method provides insight about the behavior of classification models used to benchmark the energy performance of buildings and helps to understand the motivations behind correct and wrong classifications. This information is helpful for certification entities, technical figures, and other stakeholders involved in the decision-making process.
Kaplun et al. (2021) [45], Cancer cell profiling: The authors mentioned that understanding the reasons behind the test results is essential for analyzing, retraining, or modifying the model.
The authors mentioned that
explainability is important in their
specific application of heart failure
Moreno-Sanchez (2023) [74] Cardiovascular medicine
survival prediction to facilitate healthcare
professionals’ understanding and
interpretation of the model’s outcomes.
The authors mentioned that while
complex models like RNNs offer high
accuracy, they can be challenging to
interpret. Therefore, explainability was
Wongburi et al. (2022) [150] Wastewater treatment
crucial to understand why the algorithm
made certain predictions in the context of
predicting the Sludge Volume Index in a
Wastewater Treatment Plant.
Obayya et al. (2022) [79], Diabetic retinopathy grading and classification: The authors mentioned that explainability is essential in healthcare settings, especially in diagnosing diseases like diabetic retinopathy, to provide transparent and interpretable insights into the decision-making process of the AI model.
Heistrene et al. (2023) [333], Electricity price forecasting: To identify whether or not the model prediction at a given instance is trustworthy.
Azam et al. (2023) [125], Automating the skull stripping from brain magnetic resonance (MR) images: They need visualizations to detect/segment the brain from non-brain tissue.
Ribeiro et al. (2022) [334], Detection of abnormal screw tightening processes: For an interactive visualization tool that provides explainable artificial intelligence (XAI) knowledge for the human operators, helping them to better identify the angle-torque regions associated with screw tightening failures.
Zinonos et al. (2022) [144], Grape leaf diseases identification: To visualize the decisions of the CNN's output layer.
Neupane et al. (2022) [184], Intrusion detection/cybersecurity: For example, to enable users to locate malicious instructions.
Aslam et al. (2022) [151], Prediction of undesirable events in oil wells: To enable surveillance engineers to interpret black box models to understand the causes of abnormalities.
Pisoni et al. (2021) [225], Explanations for art: To provide improved accessibility to museums and cultural heritage sites.
Blomerus et al. (2022) [335], Synthetic aperture radar target detection: To furnish the user with additional information about classification decisions.
Humans (domain experts or not) tend to
not trust ML systems if they do not
provide explanations. Sufficient
Estivill-Castro et al. (2022) [336] Human-in-the-loop machine learning
explanations are important because they
show correlations between features and
therefore help understand the model.
Droughts in prairies are becoming
increasingly worse due to climate change,
and droughts cause losses in agriculture.
Mardian et al. (2023) [152] Predicting drought in Canadian prairies Explainable AI can give insight on what
factors predict or induce drought, and
with this knowledge losses can
be minimized.
ML models do not give information
about feature importance, but it is
important to know genomic features that
affect drug response prediction. XAI is
Park et al. (2023) [93] Predicting drug response
not much researched in drug response
prediction, and because of these reasons,
explainability is necessary and needs to
be researched more in this use case.
The authors mentioned that by extracting
and ranking the most relevant genomic
features employed by the best performing
models, they can provide insights into
the interpretability of the models and the
identification of important motifs for
Danilevicz et al. (2023) [145] Plant genomics lncRNA classification. And also, they
mentioned that explainability is essential
for understanding the underlying
mechanisms driving the classification of
lncRNAs and for gaining insights into the
regulatory motifs present in
plant genomes.
Alfeo et al. (2022) [159], Predictive maintenance in manufacturing systems: The authors mentioned that explainable ML provides human-understandable insights about the mechanism used by the model to produce a result, such as the contribution of each input in the prediction, and this is essential in the context of predictive maintenance.
Lack of explanations makes ML systems
incomprehensible for medical experts.
Explanations are needed to make sure the
doctor can be the one that makes the final
Sargiani et al. (2022) [55] Predicting COVID-19
decision. Explainability can also help
detect biases in models, because biases
are not uncommon in COVID prediction
models due to unbalanced data.
The authors mention the importance of
explainability in their specific application
to shed light on why one agent is more
important than another in a cooperative
game setting. Also, the authors
Angelotti et al. (2023) [337] Cooperative multi-agent systems mentioned that they can provide insights
into the factors that influence the
achievement of a common goal within a
multi-agent system by assessing the
contributions of individual agents’
policies and attributes.
Explainability is needed to make sure that
ML models’ decision-making processes
line up with current knowledge on
Jeong et al. (2022) [103] Predicting Alzheimer’s disease dementia
Alzheimer’s disease and dementia. When
a model is proven to work accurately, it
can be implemented in concrete use cases.
The authors mentioned that the
explainability is important in their
specific application to facilitate human AI
collaboration towards perspective
analytics. And also, they mentioned that
Pereira et al. (2021) [208] Early prediction of student performance
by providing explanations for the
predictive model decisions, they can
support students, instructors, and other
stakeholders in understanding why
certain predictions were made.
The authors mentioned that it is essential
to understand the reasoning behind the
model’s decisions in CPSs because the
outcomes of machine learning models
can have significant impacts on safety,
Wickramasinghe et al. (2021) [286] Cyber-physical systems
security, and operations. And also they
mentioned that explainable unsupervised
machine learning methods are needed to
enhance transparency, trust, and
decision-making in CPS applications.
The authors mentioned that the
explainability is essential to providing
Bello et al. (2024) [241] Object detection and image analysis
insights into the decision-making process
of complex deep learning architectures.
Song et al. (2022) [253], Predicting minimum energy pathways in chemical reaction: White-box models often fail to express complex chemical systems (like enzyme catalysis). Explainability is needed to enhance understanding of ML models and assist in responsible decision-making.
Tang et al. (2022) [338], Safety of XAI, preventing adversarial attack on XAI system: It has been shown that explanation techniques are vulnerable to adversarial attacks; one can change explanations without changing the model outcome. Stability of explanations is important and needs to be studied to achieve safer ML/XAI systems.
Al-Sakkari et al. (2023) [339], Carbon dioxide capture and storage (CCS): The authors mentioned that explainability is important in their specific application to gain a better understanding of the effects of process and material properties on the variables of interest. Further, they mentioned that by adding explainability to accurate models, it can provide insights into the impact of different variables on the measured variables, enhancing the overall understanding of the system.
Iliadou et al. (2022) [100], Predicting factors that influence hearing aid use: Automated decision-making systems that utilize AI are not easily accepted in healthcare because usually they aren't interpretable and therefore not trustworthy. In healthcare and medical decision-making, accountability in case of a wrong decision is a serious ethical question when AI is used to make decisions. Explainability is needed to increase trust and transparency.
The authors mentioned that
explainability is important not only to
provide predictions but also to highlight
the variables driving the predictions.
Kwong et al. (2022) [46] Prostate cancer management Further, they mentioned that this
transparency helps build trust in the
model by ensuring that the predictions
and explanations align with
clinical intuition.
Ge et al. (2023) [297], Cyber threat intelligence (CTI) analysis and classification: The authors mentioned that explainability is important in their application to enhance the interpretability, reliability, and effectiveness of cyber threat behavior identification through the clear delineation of key evidence and decision-making processes.
Alcauter et al. (2023) [209], Predicting dropout in faculty of engineering: Explanations can give comprehensive insight on prediction factors. This increases trust and also enables precise actions on decreasing dropout rates in engineering studies, where dropout rates are high.
The authors mentioned that
explainability in their specific application
is important to help in understanding
and interpreting the decisions made by
the model, especially in the context of
diagnosing skin lesions. Further, they
Apostolopoulos et al. (2022) [340] Skin cancer detection and classification
mentioned that by visualizing the
important regions, the model’s
predictions can be better understood and
trusted, leading to more transparent and
interpretable results in the classification
of skin lesions.
Classifying pollen is a complicated ML
task because of complex data (chemical
structure of pollen, etc.), and intrinsically
Brdar et al. (2023) [254] Pollen identification interpretable models often fail in
performance. Explainability of black-box
models' solutions is needed to create
trust towards AI systems.
The authors mentioned that
explainability in their specific application
is important to provide insights into the
decision-making process of the deep
learning model. Also, they mentioned
Apostolopoulos et al. (2022) [340] COVID-19 detection
that transparency is also essential for
building trust in the model’s decisions,
ensuring accountability, and enabling the
actual users of the model to understand
and interpret the image findings.
The authors mentioned that
explainability in their specific application
is important to explain the relationship
between symptoms and the predicted
Henzel et al. (2021) [61] COVID-19 data classification
outcomes. And also to enhance the
interpretability of the models and
provide a transparent understanding of
how the classifiers make decisions.
Explanations in systems that detect facial
expressions give interpretability and
Deramgozin et al. (2023) [341] Facial action unit detection
deeper understanding about how the
model works.
The authors mentioned that
explainability in their specific application
is important to address the issue of model
interpretability. Further, they mentioned
Maouche et al. (2023) [43] Breast cancer metastasis prediction
that the increased complexity of the
models is associated with decreased
interpretability, which causes clinicians to
distrust the prognosis.
Researchers and other users need
explanations of ML-made decisions to
evaluate the model and to make correct
Zaman et al. (2021) [263] Control chart patterns recognition
final decisions. It is important to find
explainable and efficient ML systems that
do not require too many resources.
The authors mentioned that
explainability in their specific application
is important to enhance the confidence of
DNN-based solutions. They mentioned
that for autonomous systems operating in
Dassanayake et al. (2022) [342] Autonomous vehicles unpredictable environmental conditions,
the rationale behind the decisions made
by DNNs is essential for accountability,
reliability, and transparency, specifically
in safety-critical edge systems like
autonomous transportation.
McFall et al. (2023) [112], Early detection of dementia in Parkinson's disease: The authors mentioned that explainability in their specific application is important to selectively identify and interpret early dementia risk factors in Parkinson's disease patients.
The authors mentioned that the existing
deep learning classifiers lack
transparency in interpreting findings,
which can limit their applications in
clinical practice. Further, they mentioned
Zhang et al. (2023) [56] Diagnosis of COVID-19
that providing explainable results (like
the proposed CXR-Net model to assist
radiologists in screening patients with
suspected COVID-19) is reducing the
waiting time for clinical decisions.
The authors mentioned that
explainability in their specific application
is important to gain insights, interpret
Qayyum et al. (2023) [343] Material property prediction model predictions, identify key factors
influencing the outcomes, and advance
material discovery in the field of
PZT ceramics.
Lellep et al. (2022) [344], Relaminarization events in wall-bounded shear flows: The authors mentioned that explainability in their specific application is important to provide a physical interpretation of the machine learning model's output. They also mentioned that the interpretability is crucial for understanding the underlying physical processes driving relaminarization events and gaining insights into the dynamics of turbulent flows close to the onset of turbulence.
Bilc et al. (2022) [345], Retinal nerve fiber layer segmentation: Standard software suffers from noisy data and unclear decision-making processes. Explainability would enable controlling the model's learning process and also validating the results.
Sakai et al. (2022) [346], Congenital heart disease detection: Medical professionals tend not to trust black-box models and therefore not use them. Explainability would increase use of AI systems by medical professionals and therefore enhance their performance.
Explainability is needed to ensure
fairness and ethics of ML model-made
decisions and also to improve ML model
performance. Credit card frauds are
Terzi et al. (2021) [347] Credit card fraud detection
constantly evolving, and explainability
techniques give insight on ML models
and therefore can help detect new kinds
of attacks.
Allen (2023) [260], Obesity prevalence prediction: Explainability of ML models that predict obesity can help detect the features that affect obesity rates the most and therefore lead to better decision-making in obesity prevention.
The authors mentioned that
explainability in their specific application
is important for sign language
recognition to address variability in sign
Kothadiya et al. (2023) [348] Sign language recognition
gestures, facilitate communication for
physically impaired individuals, and
enhance user trust and understanding of
the recognition model.
Slijepcevic et al. (2023) [349], Human gait analysis in children with Cerebral Palsy: The authors mentioned that explainability in their specific application is important to promote trust and understanding of machine learning models in clinical practice, especially in the medical field where decisions impact patient care and outcomes. They also mentioned that by examining whether the features learned by the models are clinically relevant, explainability ensures that the decisions made by the models align with the expertise and expectations of healthcare professionals.
The authors mentioned that
explainability in their specific application
is important to enhance the reliability of
AI models, facilitate direct response to
Hwang et al. (2021) [350] Sensor fault detection
threats, and provide comprehensive
explanations for security experts to
ensure the safety of Industrial
Control Systems.
Rivera et al. (2023) [351], Predicting the arrivals at an emergency department: Classical techniques for explaining regression models are often biased and model-specific. It is important to search for more generalizable and global explaining techniques when AI is used in increasing amounts in critical fields.
Park et al. (2023) [352], Prediction of nitrogen oxides in diesel engines: The authors mentioned that explainability in their specific application is important to understand the influence of input features on NOx prediction. Further, they mentioned that by explaining the model's decisions and the relationships between input and output variables, the model becomes more transparent and trustworthy in applications where prediction accuracy and feature importance are essential, such as in the automotive industry for developing low-carbon vehicles.
Abdollahi et al. (2021) [353], Urban vegetation mapping: The authors mentioned that explainability in their specific application is important to comprehend model decisions, to grasp complicated inherent non-linear relations, and to determine the model's suitability for monitoring and evaluation purposes.
The authors mentioned that
explainability in their specific application
is important for transparency in
decision-support systems to ensure that
Xie et al. (2021) [354] Air-traffic management
the AI/ML algorithms used in predicting
risks in uncontrolled airspace can be
understood and trusted by
human operators.
Al-Hawawreh et al. (2024) [355], Cyber-physical attacks (use case: gas pipeline system): The authors mentioned that explainability in their specific application is important to enhance the trustworthiness of the AI models and to contribute to performance improvements, safety, audit capabilities, learning, and compliance with regulations.
The authors mentioned that
explainability in their specific application
is important for understanding the
features that drive a model prediction,
which can potentially aid in
decision-making in complex healthcare
Laios et al. (2023) [50] Cancer prediction scenarios. They also mentioned that as
natural language processing moves
towards deep learning, transparency
becomes increasingly challenging,
making explainability essential for
ensuring trust and understanding in the
model’s predictions.
Predicting and identifying prostate cancer
is difficult because of complex indicators
of disease. Medical professionals would
benefit from using clinical decision
support systems for diagnosing prostate
Ramirez-Mena et al. (2023) [47] Prostate cancer prediction
cancer, but they often do not use them
because they are not interpretable and
trustworthy. XAI is needed to increase
trust and therefore allow the use of
diagnostic tools for cancer prediction.
Srisuchinnawong et al. (2021) [356], Robotics: State-of-the-art explanation techniques often fail to visualize the whole structure of a neural network (neural ingredients) and also do not support robot interface.
Dai et al. (2023) [357], Using classical statistical analysis methods for explaining NN model: Explainability increases trust and understanding of AI systems and therefore enables AI system use in clinical settings.
Remote sensing image scene classification
is a computationally demanding task,
and deep learning methods and neural
Feng et al. (2022) [300] Remote sensing image scene classification networks provide the computational
accuracy and efficiency needed. Lack of
explainability in those black-box methods
leads to distrust towards models.
The authors mentioned that the
explainability of their specific application
is important to understand the processes
underlying the observed data rather than
solely performing predictive tasks in the
context of spatial data modeling. And
Li (2022) [358] Ride-hailing service demand modeling
also, they mentioned that explainability is
essential for extracting spatial
relationships, visualizing them on maps,
and enabling analysts to understand and
interpret the spatial effects captured by
the machine learning models.
Palatnik de Sousa et al. (2021) [57], Predicting COVID-19 disease from chest X-ray image and CT-scan: Explainability is important in AI systems that are used in healthcare to ensure accuracy of models' decisions and trust towards models. Explanations can also help detect different kinds of biases in AI systems.
Delgado-Gallegos et al. (2023) [62], Assessment of perceived stress in healthcare professionals attending COVID-19: The authors mentioned that the explainability of their specific application is important to locate the combination of factors necessary to correctly classify healthcare professionals based on their perceived stress levels. Further, they mentioned that the decision tree model served as a graphical tool that allowed for a clearer interpretation of the factors influencing stress levels in healthcare professionals.
The authors mentioned that the
explainability of their specific application
is important to provide a human operator
Gonzalez-Gonzalez et al. (2022) [359] Industrial carbon footprint estimation
with an in-depth understanding of the
classification process and to validate the
relevant explanation terms.
The authors mentioned that the
explainability of their specific application
is important to ensure that users
understand and trust the system’s
predictions and decisions regarding
Elayan et al. (2023) [360] Power consumption prediction power consumption. Further, they
mentioned that users can gain insights
into the model’s behavior, biases, and
outcomes, and explainability increased
transparency and user confidence in
the system.
The authors mentioned that the
explainability of their specific application
is important and essential for building
trust in the AI model, facilitating
Duc Q Nguyen et al. (2022) [58] COVID-19 forecasting collaboration between AI systems and
human experts, and ultimately
improving the effectiveness of
decision-making processes in managing
the COVID-19 pandemic.
Cheng et al. (2022) [361], Deep learning models used in forestry: Researchers and users are having a difficult time trying to understand black-box models that are widely used due to their high performance and efficiency. Explanations can give otherwise not found hints about how the ML model can be made better.
Studying alumni income and
socioeconomic status can help
educational institutions in improving
efficiency and planning the studies,
Gomez-Cravioto et al. (2022) [210] Alumni income prediction
which helps future graduates.
Explainability can give necessary insight
on important factors that influence the
success of the alumni.
Linear models are used in biological age
prediction because of their
interpretability, but they offer low
accuracy. Explainability of black-box
Qiu et al. (2023) [362] Biological age prediction
models is needed so more efficient and
accurate models can be used. Local
explanations are needed so ML models
can be used on individuals.
The authors mentioned that the
explainability of their specific application
is important to demonstrate the impact of
individual features on the model’s
Abba et al. (2023) [363] Water quality assessment
predictions and supports stakeholders
and decision-makers in making informed
choices regarding groundwater
resource management.
The authors mentioned that the
explainability of their specific application
is important to build the trustworthiness
of machine learning models, especially
Martinez-Seras et al. (2023) [364] Image classification
with Spiking Neural Networks, and it is
essential for ensuring the acceptance and
adoption of these models in
real-world settings.
The authors mentioned that the
explainability of their specific application
is important to enable domain experts
Krupp et al. (2023) [365] Tool life prediction from the field of machining to develop,
validate, and optimize remaining tool life
models without extensive machine
learning knowledge.
The authors mentioned that
explainability in their specific application
is important for understanding the
features that drive a model prediction,
which can potentially aid in
decision-making in complex healthcare
Nayebi et al. (2023) [366] Clinical time series analysis scenarios. They also mentioned that as
natural language processing moves
towards deep learning, transparency
becomes increasingly challenging,
making explainability essential for
ensuring trust and understanding in the
model’s predictions.
A de-identification system without
explainability is unusable in the medical
domain because of critical data.
Explainability is needed to gain
Lee et al. (2022) [367] A de-identification of medical data
transparency and also assist in
developing better de-identification
models and modifying processes
connected to de-identification.
Mulberry is a culturally important plant
in the Himalayan area, and little to no
studies have been made to improve
mulberry leaf disease detection.
Explainability enables the use of AI
Nahiduzzaman et al. (2023) [368] Mulberry leaf disease classification
systems for disease detection by
mulberry farmers and enhances model
development. An explainable model
could also be used to detect disease from
other plants’ leaves.
Khan et al. (2022) [369], Vision-based industrial applications: The authors mentioned that the model's decisions and predictions can be understood and interpreted by ensuring explainable, correct annotations by the proxy model. They also mentioned that transparency in the decision-making process is essential for building trust in the model's outputs in industrial applications.
Explanations would validate and clarify
otherwise uninterpretable ML model
decisions. Explainability is needed so
Beucher et al. (2022) [370] Detecting acid sulfate in wetland areas that the results of research can be
communicated to both expert and
non-expert audiences. Explanations also
help build better ML models.
The authors mentioned that the
explainability in their specific application
Kui et al. (2022) [371] Disease severity prediction is important to help physicians
understand the decision-making process
of the ML model.
AI methods cannot be implemented in
critical fields without a good
understanding of how models work.
State-of-the-art visual XAI techniques
don’t explain why important areas are
Szandala (2023) [372] Explaining ML model with saliency maps
important. Saliency maps with
information about the selection of
important areas give more
comprehensive insight about the ML
model’s decision-making.
The authors mentioned that in the
context of safety-critical systems,
explainability is essential for ensuring
Rengasamy et al. (2021) [373] Safety critical systems
transparency, accountability, and trust in
the decision-making process facilitated
by ML models.
The authors mentioned that practical
implications of their specific application
include improved inventory control,
reduced backorders, and enhanced
operational efficiency. Thus, by using
explainability, it empowers the
Jahin et al. (2023) [374] Supply chain management decision-making and efficient resource
allocation in supply chain management
systems. Further, they mentioned that
this transparency and interpretability are
essential for stakeholders to understand
the model’s predictions and trust its
recommendations.
Explaining and evaluating the
explanations given by XAI methods is
Nielsen et al. (2023) [375] Evaluating and explaining XAI methods
important to ensure model robustness,
faithfulness, and safety.
Hashem et al. (2023) [376], Brain-computer interface system to analyze EEG signals: The authors mentioned that by using explainability, they can get greater transparency and understanding of the relationship between the EEG features and the model's predictions. Further, they mentioned that this transparency is essential for enhancing the interpretability of the BCI systems in the context of controlling diverse limb motor tasks to assist individuals with limb impairments and improve their quality of life.
Lin et al. (2022) [377], Classifying lncRNA and protein-coding transcripts: RNA data is complicated, and neural network models have shown the best performance in classification tasks, but the neural network models lack interpretability. Explainability increases understanding of the ML model's decision-making process in the RNA classification task.
The authors mentioned that the
explainability of their specific application
is important to enable researchers and
practitioners to understand how ML
models work in order to strategically
Chen et al. (2023) [378] Land cover mapping and monitoring improve model performance for land
cover mapping with Google Earth Engine,
to support fine-tuning and optimizing
models, to help gauge trust in the models,
and to address the lack of explainability
in some parts of the scientific process.
In automatic target recognition, it is very
important to know that the ML model
learns to look at the right things (target)
Oveis et al. (2023) [379] Automatic target recognition in image data because when a new kind
of situation (new kind of truck/car/tank)
appears, the model has to be able to
recognize the target correctly.
Llorca-Schenk et al. (2023) [380], Designing porthole aluminium extrusion dies: The authors mentioned that explainability is important in their specific application to help when deciding the best way in which to adjust an initial design to the predictive model.
The authors mentioned that the
explainability is essential in their
application of predicting employee
attrition as it helps in designing effective
Diaz et al. (2023) [381] HR decision-making
retention and recruitment strategies as
well as enhances trust, transparency, and
informed decision-making in human
resources management.
To enable the consideration of
interpretability, which is an extremely
important additional design driver,
Pelaez-Rodriguez et al. (2023) [382] Extreme low-visibility events prediction especially in some areas where the
physics of the problem plays a major role,
such as geoscience and Earth
observation problems.
An et al. (2023) [383], NA: To understand how deep learning models make predictions.
Anjara et al. (2023) [48], Oncology (lung cancer relapse prediction): To improve trust and adoption of AI models.
Glick et al. (2022) [384], Dental radiography: To assist/help novice dental clinicians (dental students) in decision-making.
Qureshi et al. (2023) [385], Mosquito trajectory analysis: To give insights into the mechanisms that may limit mosquito breeding and disease transmission.
Kim et al. (2023) [78], Cardiology: To reduce a high rate of false alarms in cardiac arrest prediction models and to make their results clinically (more) interpretable.
Wen et al. (2023) [386], Alzheimer disease detection (from patient transcriptions): To discover the underlying relationships between PoS features and AD.
Alvey et al. (2023) [387], Aerial images analysis: To understand and explain the behavior of deep learning models.
Maaroof et al. (2023) [82], Diabetes prediction: To gain insight into how the model makes its predictions and build trust in its decision-making process.
Hou et al. (2022) [388], Image classification: To produce improved filters for preventing advanced backdoor attacks.
Nakagawa et al. (2021) [389], Mortality prediction of COVID-19 patients (from healthcare data): To allow data scientists and developers to have a holistic view, a better understanding of the explainable machine learning process, and to build trust.
Yang et al. (2022) [390], Process execution time prediction: To explain how ML models predicting the time until the next activity in the manufacturing process work.
O'Shea et al. (2023) [391], Lung tumor detection: Interpretability is essential for reliable convolutional neural network (CNN) image classifiers in radiological applications.
Tasnim et al. (2023) [392], Cardiology: To examine the contribution of features to the decision-making process and to foster public confidence and trust in ML model predictions.
Marques-Silva et al. (2023) [393] NA NA
Lin et al. (2023) [274], Visual reasoning: To disclose/explain the decision-making process from the numerous parameters and complex non-linear functions.
Pedraza et al. (2023) [394] Sensor measurements To better understand the AI model.
Kwon et al. (2023) [395], NA: To derive a mechanism of quantifying the importance of words from the explainability score of each word in the text.
Rosenberg et al. (2023) [396], Integer linear programming and quadratic unconstrained binary optimization: Explainability is needed so that ML models can be trusted. Explainability can also help detect biases and help improve the ML model. Expressive boolean formulas for explanations can increase flexibility and interpretability.
Neural network models provide good
performance in simulating water quality,
but because of bad explainability, the
O’Sullivan et al. (2023) [397] Water quality modeling model’s decisions are hard to use to make
management decisions. Explainability
would increase trust and usability of
ANN models in water quality modeling.
The authors mentioned that the
explainability is important in their
specific application to obtain insights
Richter et al. (2023) [398] Radar-based target classification
about the decision-making processes of
the model and ensure the reliability and
effectiveness of their system.
Traffic sign recognition systems need to
be accurate and reliable, and that is why
neural network models are used. They
lack interpretability, which makes
Khan et al. (2024) [237] Traffic sign recognition detecting bias and evaluating model
performance difficult. Transparent and
safe systems in this kind of critical
application of ML models are
very necessary.
The authors mentioned that the
explainability is important in their
specific application to fully understand
the underlying process in the
classification because the classification
results could lead to harmful events for
Heimerl et al. (2022) [235] Emotional facial expression recognition
individuals. Further, they have
mentioned that the transparency and
interpretability of AI models are really
important in applications involving
sensitive information and
safety-critical scenarios.
Dong et al. (2021) [399], Medical image noise reduction by feature extraction: Portable ultrasound devices are cheap and very convenient, but they can give noisy images. Explainability and identifying important features are crucial for successful noise reduction with feature extraction in the medical domain.
Explainability is needed in online
healthcare (medical metaverse) so
doctors can have more information
about patients’ status and therefore make
Murala et al. (2023) [400] Healthcare metaverse online model better medical decisions. Explainability
of AI-made medical decisions enhances
transparency, reliability, predictability,
and therefore safety, which benefits both
the doctor and the patient.
The authors mentioned that
explainability is important in their
specific application to improve
decision-making capability for
physicians, researchers, and health
officials at both patient and community
Brakefield et al. (2022) [401] Health surveillance and decision support
levels. Further, they mentioned that there
are many existing digital health solutions
that lack the ability to explain their
decisions and actions to human users,
which can hinder informed
decision-making in public health.
Lee et al. (2021) [204], Predicting online purchase based on information about online behaviour: End users can find it hard to trust a black-box model and therefore end up not using the model. Explainability increases trust towards models and enables reliable use of AI, which can be very beneficial in the context of online marketing.
Ortega et al. (2021) [402], Applying inductive logic programming to explain black-box models: In some applications of AI, explanations are required by law or needed to ensure the ethics of decision-making. Inductive logic programming systems are interpretable by design, and applying this method to classical ML models enhances their interpretability and performance.
An et al. (2022) [403], Producing clear visual explanations to black-box models: Lack of explainability leads to users not trusting the ML model in critical applications (like healthcare, finance, and security).
The authors mentioned that the
explainability is important in their
specific application to interpret and
understand these emergent properties in
a more efficient way because existing
airport terminal operation models have
heavy computational requirements. And
also, they mentioned airport terminals
De Bosscher et al. (2023) [293] Airport terminal operations are involving complex systems, and
explainability helps to understand the
dynamics of these complex sociotechnical
systems. Further, they mentioned that
using explainability, they can identify
opportunities for optimization and
improvement in processes such as
passenger flow, security checkpoints, and
overall terminal efficiency.
Explainability in computer vision tasks is
important so the ML model can be
trusted and used safely. State-of-the-art
heatmap explanation methods
Huang et al. (2022) [404] Remote sensing scene classification
(CAM-methods) can give a good
explanation to a black-box model, but
they are not always accurate (failing to
detect multiple planes, for example).
The authors mentioned that the
explainability is important in their
specific application to make the machine
Senocak et al. (2023) [405] Precipitation forecast learning models more transparent,
interpretable, and aligned with domain
expertise. And also to enhance the
reliability and utility of the predictions.
Explainability is important in AI
applications in the field of cybersecurity
because one mistake made by the model
can lead to a large amount of damage.
Explanations make the ML model
Kalutharage et al. (2023) [406] Anomaly detection
trustworthy and also enable better model
development. Detecting important
features can also increase the
performance of the intrusion
detection system.
The authors mentioned that the
explainability is important in their
specific application to provide a clear
understanding of the decision-making
process of the AI models. Further, they
mentioned that by providing
Sorayaie Azar et al. (2023) [407] Monkeypox detection
explainability, the clinicians can gain
deeper insights into how the AI models
arrive at their predictions, which is
crucial for fostering trust and confidence
in the reliability of AI systems in
real-world clinical applications.
The authors mentioned that explaining
the prediction is essential in medical
Di Stefano et al. (2023) [408] Early diagnosis of ATTRv Amyloidosis domains because the patterns a model
discovers may be more important than
its performance.
Huong et al. (2022) [409], Industrial Control Systems (ICS): The authors mentioned that explainability is important in their specific application of anomaly detection in Industrial Control Systems (ICS) because explaining the detection outcomes and providing explanations for anomaly detection results is essential for ensuring that experts can understand and trust the decisions made by the model.
To understand how the technology
works, what its limits are, and what
Diefenbach et al. (2022) [410] Smart living room
consequences regarding autonomy and
privacy emerge.
Gkalelis et al. (2022) [265], Video event and action recognition: To derive explanations along the spatial and temporal dimensions for the event recognition outcome.
To provide the transparency of the model
Patel et al. (2022) [411] Water quality prediction
to evaluate the results of the model.
Mandler et al. (2023) [292], Data-driven turbulence models: To make the prediction process of neural network-based turbulence models more transparent.
Kim et al. (2023) [412] Cognitive load prediction To detect important features.
Huelsmann et al. (2023) [270], Energy system design: To better understand the influence of all design parameters on the computed energy system design.
Schroeder et al. (2023) [413], Predictive maintenance: For creating a wide acceptance of AI models in real-world applications and aiding the identification of artifacts.
Singh et al. (2022) [414], Bleeding detection (from streaming gastrointestinal images): To reverse engineer the test results for the impact of features on a given test dataset.
Pianpanit et al. (2021) [113], Parkinson's disease (PD) recognition (from SPECT images): For easier model interpretation in a clinical environment.
Khanna et al. (2022) [289] Assessing AI agents To be able to assess an AI agent.
Kumara et al. (2023) [415], Performance prediction for deployment configurable cloud applications: To provide explanations for the prediction outcomes of valid deployment variants in terms of the deployment options.
Konforti et al. (2023) [416], Image recognition: To explain neural network decisions and internal mechanisms.
Ullah et al. (2022) [255], Credit card fraud detection; customer churn prediction: To improve trust and credibility in ML models.
Gaur et al. (2022) [115], Prediction of brain tumors (from MRI images): To realize disparities in predictive performance, to help in developing trust, and in integration into clinical practice.
Al-Hussaini et al. (2023) [123], Seizure detection (from EEG): To foster trust and accountability among healthcare professionals.
Oblak et al. (2023) [417] Fingermark quality assessment To make the models more transparent.
Sovrano et al. (2022) [219] Questions answering (as explaining) To make the models more transparent.
Zytek et al. (2022) [213], Child welfare screening (risk score prediction): To overcome ML usability challenges, such as lack of user trust in the model, inability to reconcile human-ML disagreement, and ethical concerns about oversimplification of complex problems to a single algorithm output.
Quach et al. (2024) [247] Tomato detection and classification To assess model reliability.
Guarrasi et al. (2023) [59], Prediction of the progression of COVID-19 (from images and health record): To enable physicians to explore and understand the data-driven DL-based system.
Le et al. (2021) [418] NA Not mentioned.
Capuozzo et al. (2022) [419], Glioblastoma multiforme identification (from brain MRI images): To assess the interpretability of the solution showing the best performance and thus to take a little step further toward the clinical usability of a DL-based approach for MGMT promoter detection in brain MRI.
To explain the outcomes of the image
classification model and thereby enhance
Vo et al. (2023) [420] Dragon-fruit ripeness (from images)
its performance, optimization,
and reliability.
Artelt et al. (2022) [421] NA There is no specific application.
Abeyrathna et al. (2021) [422] NA There is no specific application.
The interpretable representation and
enormous speed-up allow one to produce
Krenn et al. (2021) [276] Experimental quantum optics solutions that a human scientist can
interpret and gain new scientific concepts
from outright.
Pandiyan et al. (2023) [423], Laser powder bed fusion process: To highlight the most relevant parts of the input data for making a prediction.
Huang et al. (2023) [222], Assessment of familiarity ratings for domain concepts: To be able to evaluate familiarity ratings of domain concepts more in-depth and to underline the importance of focusing on domain concepts' familiarity ratings to pinpoint helpful linguistic predictors for assessing students' cognitive engagement during language learning or online discussions.
To enhance the reliability of the
Jeon et al. (2023) [424] Land use (from satellite images)
image analysis.
There is no specific application; however,
the explainability is important because of
Fernandez et al. (2022) [264] NA the increasing number of applications
where it is advisable and even
compulsory to provide an explanation.
To improve the trust of the
Jia et al. (2022) [425] WiFi fingerprint-based localization
proposed method.
Munkhdalai et al. (2022) [426] NA There is no specific application.
Schrills et al. (2023) [295] Subjective information processing awareness (in automated insulin delivery) To help users cooperate with AI systems by addressing the challenge of opacity. Subjective information processing awareness (SIPA) is strongly correlated with trust and satisfaction with explanations; therefore, explanations and higher levels of transparency may improve cooperation between humans and intelligent systems.
Gouabou et al. (2022) [427] Melanoma detection To overcome the dermatologist's fear of being misled by a false negative and the assimilation of CNNs to a "black box", which makes their decision process difficult for a non-expert to understand.
Okazaki et al. (2022) [205] Customer journey mapping automation (through model-level data fusion) Trustworthiness and fairness have to be established (by using XAI) in order for the black-box AI to be used in the social systems it is meant to support.
Mridha et al. (2023) [41] Skin cancer classification Using provided explanations, the clinician may notice the color irregularity in the dermatoscopic picture, which is not evident on the lesion, and figure out why the classifier predicted incorrectly.
Abeyrathna et al. (2021) [428] NA There is no specific application.
Nagaoka et al. (2022) [429] COVID-19 prediction (from lung CT slice images) The Grad-CAM method was used so that the authors could be sure that their method used only the pixels from certain locations (where the lungs are) of the image used for classification; the explanation itself is not important here.
Joshi et al. (2023) [234] Misinformation detection (specifically COVID-19 misinformation; from texts) Knowing the reasoning behind the outcomes is essential to making the detector trustworthy.
Ali et al. (2022) [430] COVID-19 prediction (from X-ray images) To maintain the transparency, interpretability, and explainability of the model.
Elbagoury et al. (2023) [431] Stroke prediction (based on EMG signals) To support (personalized) decision-making.
Yuan et al. (2022) [432] Human identification and activity recognition Explainability is necessary so relationships between model inputs and outputs can be identified. This is necessary so the behavior of the proposed method (fusion model) can be inferred.
Someetheram et al. (2022) [433] Explaining and improving discrete Hopfield neural networks Election algorithms are good at reducing the complexity of HNN models, but it is not known how they do that. It is known how the complexity must be reduced, so explainability is needed to ensure the models work the right way.
Sudars et al. (2022) [434] Traffic sign classification The authors mentioned that understanding the decision-making process of CNNs is essential in applications where human lives are at stake, such as autonomous driving. Further, they mentioned that explainability can provide insights into the inner workings of the CNN model for improved transparency and trust in the classification results.
Altini et al. (2023) [269] Kidney tumor segmentation The authors mentioned that explainability is important in their specific application of anomaly detection in Industrial Control Systems (ICS) because explaining the detection outcomes and providing explanations for anomaly detection results is essential for ensuring that experts can understand and trust the decisions made by the model.
Serradilla et al. (2021) [273] Predictive maintenance Explanations about the model's decision in the anomaly detection task enable the operator to evaluate the model's accuracy and act based on their own expertise.
Aslam et al. (2023) [435] Malware detection Black-box models are widely used in studies and real-life applications of malware detection, but black-box models lack the interpretability that could be used to validate models' decisions. Explainability in the context of malware detection on Android devices has not been studied much, and new information about attacks on Android devices could be gained by applying explainability to malware detection models. Explainability would also help users trust malware detection models.
Shin et al. (2023) [436] Network traffic classification The authors mentioned that explainability in their specific application is important to increase reliability. They also mentioned that as the performance of both ML and DL models improves, the derivation process of the results becomes more opaque, highlighting the need for research on transparent design and post-hoc explanation for artificial intelligence.
Samir et al. (2023) [437] Bug assignment and developer allocation The authors mentioned that the explainability of their specific application is important to increase user trust and satisfaction with the system.
Guidotti et al. (2021) [438] Distinguish time series representing heart rate between normal heartbeat and myocardial infarction To reveal how the AI system is reasoning and agree with it or not in an easier way. Also, developers can unveil misclassification reasons and vulnerabilities and act to align the AI reasoning with human beliefs.
Ekanayake et al. (2023) [439] Predict adhesive strength For identifying the importance of features and elucidating the ML model's inner workings.
Hendawi et al. (2023) [81] Diabetes prediction The authors mentioned that providing easily interpretable explanations for complex machine learning models and their outcomes is essential for healthcare professionals to get a clear understanding of AI-generated predictions and recommendations for diabetes care.
Kobayashi et al. (2024) [440] Predict remaining useful life within intelligent digital twin frameworks For AI decisions to be audited, accounted for, and easy to understand.
Misitano et al. (2022) [271] Multiobjective optimization Explanations support the decision maker (user) in making the changes needed in the multiobjective optimization task. The opaqueness of black-box methods is problematic when these methods are applied in critical domains (healthcare, security, etc.).
Leite et al. (2023) [282] Predict the geographic location of a vehicle To assure stable and understandable rule-based modeling.
Varam et al. (2023) [248] Endoscopy image classification The authors mentioned that explainability is essential in their specific application to enhance the reliability, trust, and interpretability of deep learning models for Wireless Capsule Endoscopy image classification, which benefits clinical research and decision-making processes in the medical domain.
Bitar et al. (2023) [441] Explaining spike neural networks There is very little research done about explaining spike neural networks. Explainability of these models is needed so they can be understood better and therefore improved more efficiently. Also, developing model-specific explanation tools for SNN models is beneficial because model-specific tools are often less computationally exhaustive than model-agnostic XAI tools.
Kim et al. (2023) [442] Cerebral cortices processing The authors mentioned that explainability is essential in their specific application to understand the neural representations of various human behaviors and cognitions, such as semantic representation according to words, neural representation of visual objects, or kinetics of movement. Further, they mentioned that explainability allows for a deeper understanding of the cortical contributions to decoding kinematic parameters, which is essential for advancing the study of neural representations in different cognitive processes and behaviors.
Khondker et al. (2022) [443] Pediatric urology The authors mentioned that explainability is essential in their specific application for transparency in the medical field of pediatric urology, as it allows clinicians to comprehend the factors influencing the model's decisions and enhances confidence in the model's predictions.
Lucieri et al. (2023) [444] Biomedical image analysis The authors mentioned that explainability is essential in their specific application to address the privacy risks posed by concept-based explanations. They also mentioned the need to investigate the privacy risk posed by different human-centric explanation methods, such as Concept Localization Maps (CLMs) and TCAV scores, to properly reflect practical application scenarios.
Suhail et al. (2023) [445] Cyber-physical systems The authors mentioned that explainability is essential in their specific application to provide justifiable decisions by reasoning about what, why, and how specific cybersecurity defense decisions are made in a gaming context. Further, they mentioned that the transparency and interpretability given by explainability help in building trust, confidence, and understanding among stakeholders, finally leading to more informed and effective cybersecurity measures in the context of Digital Twins (DTs) and Cyber-Physical Systems (CPS).
George et al. (2023) [87] Predictive modeling for emergency department admissions among cancer patients The authors mentioned that explainability is essential in their specific application to enable clinicians to intervene prior to unplanned emergency department admissions. Further, they mentioned that clinicians can better understand the factors influencing the risk of ED visits using explainability, which leads to more informed decision-making and potentially improved patient outcomes.
Bacco et al. (2021) [446] Sentiment analysis AI systems for sentiment analysis have a great effect on the real world because sentiment analysis is usually used to analyze customer behavior or public opinion. Explainability is needed to ensure the models make ethical and rightful decisions.
Szczepanski et al. (2021) [229] Fake news detection Several kinds of biases are prevalent and hard to detect when detecting fake news with AI because fake news spreads on social media. Explainability is needed to gain an understanding of the model's decision process and therefore prevent biases. Explainability increases trust and therefore enables wider use of AI.
Dong et al. (2021) [256] Classifying functional connectivity for a brain-computer interface system Explainability can lead to new knowledge about aging and about using brain-computer interface systems with elderly people.
El-Sappagh et al. (2021) [108] Alzheimer's disease prediction The authors mentioned that explainability is essential in their specific application to ensure that the AI model's decisions are transparent, understandable, and actionable for clinical practice.
Prakash et al. (2023) [447] Electrocardiogram beat classification Lack of explainability makes AI methods challenging to implement in real-life use cases. Explainability increases trust, user performance, and user satisfaction.
Alani et al. (2022) [448] Malware detection The authors mentioned that explainability builds trust in the AI model in their context as well as ensures that the high accuracy originates from explainable conditions rather than from a black-box operation.
Sasahara et al. (2021) [126] Metabolic stability and CYP inhibition prediction Explainability could give new information about the importance of different physicochemical parameters. This knowledge can be used to design better drugs and to understand underlying structures better.
Maloca et al. (2021) [449] Classify medical (retinal OCT) images Understanding an AI model's decision process will provide confidence in and acceptance of the machine.
Tiensuu et al. (2021) [258] Stainless steel manufacturing The authors mentioned that explainability in their specific application is essential for facilitating human decision-making, early detection of quality risks, and conducting root cause analysis to improve product quality and production efficiency.
Valladares-Rodriguez et al. (2022) [73] Cognitive impairment detection The authors mentioned that explainability may become a fundamental requirement in their domain and tasks, such as detecting MCI, to improve the transparency and interpretability of AI-based decisions.
Ahn et al. (2021) [450] Hospital management and patient care The authors mentioned that explainability is important in their specific application to provide persuasive discharge information, such as the expected individual discharge date and risk factors related to cardiovascular diseases. Further, they mentioned that this explainability can assist in precise bed management and help the medical team and patients understand the conditions in detail for better treatment preparation.
Hammer et al. (2022) [451] Brain Computer Interfacing (BCI) The authors mentioned that explainability is important in their specific application to uncover and understand how functional specialization emerges in artificial deep convolutional neural networks during a brain-computer interfacing (BCI) task.
Ikushima et al. (2023) [452] Age prediction based on bronchial images Explainability increases trust toward the ML model due to its ability to justify model-made decisions. Explainability can also reveal new information and connections that would not have been discovered otherwise.
Kalir et al. (2023) [453] Semiconductor manufacturing The authors mentioned that explainability in their specific application is important to provide insights into the decision-making process of the machine learning models to domain experts. Further, they mentioned that this transparency in model predictions helps in building trust in the AI systems and aids in decision-making processes related to capacity, productivity, and cost improvements in semiconductor manufacturing processes.
Shin et al. (2022) [454] Cardiovascular age assessment Explainability can give new information about features that predict cardiovascular aging. Explainability also helps evaluate model performance and improve the model.
Chandra et al. (2023) [455] Soil fertility prediction The authors mentioned that explainability in their specific application is important to build trust and transparency, to enhance the decision-making process, and to provide user-friendly interpretation.
Blix et al. (2022) [456] Water quality monitoring The authors mentioned that explainability in their specific application is important to understand the relevance of spectral features for optical water types. Further, they mentioned that explainability provides insights into which variables affect each derived water type. They also mentioned that this understanding is essential for improving the estimation of chlorophyll-a content through the application of preferred in-water algorithms and improving the accuracy and interpretability of water quality monitoring processes.
Resendiz et al. (2023) [238] Cancer diagnosis The authors mentioned that explainability in their specific application is important to address the black-box problem associated with deep learning methods in medical diagnosis. They also mentioned that the lack of semantic associations between input data and predicted classes in deep learning models can hinder interpretability, which can lead to potential risks when applying these systems to different databases or integrating them into routine clinical practice.
Topp et al. (2023) [457] Predicting water temperature change ML models can behave unpredictably when new data are used. Explainability is needed to be able to evaluate and justify the model's performance and decisions. Explanations are used to evaluate the fidelity and generalizability of the ML model.
Till et al. (2023) [458] Wrist fracture detection Explainability in ML models used in healthcare is needed to ensure trust toward the model because the IT knowledge of healthcare professionals is often limited. Trading predictive performance for explainability (black-box vs. white-box models) is problematic, which makes explaining black-box models important.
Aswad et al. (2022) [459] Flood prediction Variables used in flood prediction can have some complexity, and explainability is needed to evaluate model performance and understand the decisions.
Kalyakulina et al. (2023) [71] Predicting immunological age Explainability enhances understanding of a model's decision-making process, but it can also be used to improve model performance by selecting only important features for computation. This reduces the cost of using the ML model and therefore makes it more usable. Local explanations are necessary to personalize treatments when needed.
Ghosh et al. (2024) [460] Predicting energy generation patterns Explanations give practical insight into previously theoretically studied energy generation patterns. Explanations give important information about relationships and dependencies between different features that affect energy production, especially when clean energy is the focus.
Katsushika et al. (2023) [75] Predicting reduced left ventricular ejection fraction (LVEF) Medical practitioners using ML models in medical decision-making need explainability to be able to evaluate and validate model-made decisions. Without explainability, medical practitioners cannot utilize ML models to help their work.
Hernandez et al. (2022) [107] Predicting Alzheimer's disease and mild cognitive impairment Explainability enables getting information about important features and also about the model (preprocessing, feature selection, methods). Explainability is important for validating model-made decisions with domain knowledge from the user.
Mohanrajan et al. (2022) [461] Predicting land use/land cover changes So that the predicted results will be more informative and trustworthy for the urban planners and forest department, enabling them to take appropriate measures to protect the environment.
Zhang et al. (2023) [462] Strain prediction Explainability is needed to understand the relationships and qualitative/quantitative impacts of input parameters in modeling mechanical properties. Explainability can also reveal new information about the effects of stress and strain on each other.
Wang et al. (2023) [463] ML-based decision support in the medical field Explainability is important so that medical practitioners can communicate their own and ML model-made decisions to patients, thereby ensuring patient autonomy and informed consent. Explanations also help develop better ML models by revealing the model's decision-making process. Here, explainability is used to evaluate the effect of feature selection on model performance.
Pierrard et al. (2021) [464] Medical imaging The authors mentioned the need for transparency and human understandability in the reasoning of the model in critical scenarios where decisions based on image classification and annotation can have significant consequences.
Praetorius et al. (2023) [465] Detecting intramuscular fat Explainability is needed to ensure the generalizability of the ML model used in intramuscular fat detection.
Escobar-Linero et al. (2023) [215] Predicting withdrawal from the legal process in cases of violence towards women in intimate relationships Explainability is important so the reasons behind (predicted) withdrawal from the legal process can be recognized. Explainability can give new knowledge about features that affect participation and therefore help in taking care of intimate relationship violence victims. The data from legal cases of intimate relationship violence are quite complex, and explainability is needed to interpret the black-box models used in this prediction task.
Pan et al. (2022) [466] Biometric presentation attack detection The authors mentioned that explainability in their specific application is important to enhance the usability, security, and performance of their Facial Biometric Presentation Attack Detection system.
Wang et al. (2023) [467] Drug discovery applications The authors mentioned that understanding the decisions made by models is crucial for building trust and credibility in their predictions in the context of drug discovery, where interpretability is essential for inferring target properties of compounds from their molecular structures. Further, they mentioned that explainability in drug design is a way to leverage medicinal chemistry knowledge, address model limitations, and facilitate collaboration between different experts in the field.
Jin et al. (2023) [252] Medical image analysis The authors mentioned that explaining model decisions from medical image inputs is essential for deploying ML models as clinical decision assistants. Further, they mentioned that providing explanations helps clinicians understand the reasoning behind the model's predictions. They also mentioned that explainability is essential in their specific application to enhance the transparency, trustworthiness, and utility of ML models in the context of multi-modal medical image analysis.
Naser (2022) [468] Evaluating fire resistance of concrete-filled steel tubular columns Black-box models cannot be used reliably and effectively by engineers in practice because they do not provide explanations of their decision-making process. Explainability is also needed to ensure liability and fairness in the fire engineering domain because human lives and legal aspects are involved.
Karamanou et al. (2022) [469] Anomaly detection from open governmental data Explainability is needed to ensure accountability, transparency, and interpretability of black-box ML models used in unsupervised learning tasks.
Kim et al. (2021) [470] Predicting the wave transmission coefficient of low-crested structures Explainability is needed to evaluate the performance and decision-making process of the ML model.
Saarela et al. (2021) [211] Student agency analytics The authors mentioned that explainability is important in their specific application to ensure transparency, accountability, and actionable insights for both students and educators. Further, they mentioned that the General Data Protection Regulation (GDPR) includes a right to explanation, which applies when automatic profiling is used in a Learning Analytics (LA) tool. They also mentioned that explainability can help teachers increase their awareness of the effects of their pedagogical planning and interventions.
Gong et al. (2022) [471] COVID-19 detection White-box models are widely used in the medical field because of their high interpretability despite their low performance. Explainability of black-box models is needed to enable the use of more effective ML models in the medical domain.
Burzynski (2022) [472] Battery health diagnosis The authors mentioned that explainability is important in their specific application to provide insights into the model's behavior and facilitate the interpretation of the relationships between input parameters and predictions. Further, they mentioned that this enables a better understanding of the model's decision-making process and enhances the trustworthiness of the predictions, which is essential for optimizing battery management systems and extending battery life.
Kim et al. (2022) [473] Forecasting particulate matter of aerodynamic diameter less than 2.5 µm Ambient particulate matter forecast experts tend to question the reliability of ML models and the validity of their predictions. Explainability is needed to increase trust towards black-box models.
Galiger et al. (2023) [474] Histopathology tissue type detection The authors mentioned that explainability is important in their specific application to align the decision-making process with that of human radiologists and to provide clear, human-readable justifications for model decision-making. Further, they mentioned that explainability is essential for gaining trust in the model's decisions and ensuring its reliability in the medical imaging domain.
Naeem et al. (2023) [475] Malware detection The proposed ensemble method for malware prediction is quite complex, and explainability is needed to enable interpretation and validation of model-made decisions.
Burzynski (2022) [472] Battery health monitoring The authors mentioned that explainability is important in their specific application to understand and interpret the results of machine learning and deep learning models applied to lithium-ion battery datasets. Further, they mentioned that using XAI, researchers can gain insights into the outcomes produced by the algorithms; describe the model's accuracy, fairness, transparency, and results in decision-making; and investigate any biases in predicted results.
Uddin et al. (2021) [476] Human activity recognition People tend not to accept ML systems that might be accurate and efficient if they lack interpretability. Explainability is needed to gain trust toward ML models and allow the use of more efficient models.
Sinha et al. (2023) [477] Fault diagnosis of low-cost sensors The authors mentioned that explainability is important in their specific application to increase the trust and reliability of the AI model used for fault diagnosis of low-cost sensors.
Jacinto et al. (2023) [478] Mapping karstified zones Explainability is needed to validate ML-made decisions and detect biases. Explainability allows the use of more complex models when interpretability is necessary. Explainability gives information about relationships between model inputs and outputs.
Jakubowski et al. (2022) [479] Anomaly detection in asset degradation process The authors mentioned that by providing explanations in their context, experts can understand the reasoning behind the model's decisions and ensure the reliability of the predictive maintenance actions taken based on those decisions.
Guo et al. (2024) [480] Intelligent fault diagnosis in rotating machinery The authors mentioned that providing explanations for the model's predictions is essential to improving trust in and understanding of the diagnostic process. They also mentioned that explainability helps in validating the diagnostic results and improving the generalization ability of the model in unseen domains.
Shi et al. (2021) [481] Age-related macular degeneration diagnosis The authors mentioned that explainability in their specific application is important because it helps in understanding the decision-making process of the model and the rationale behind its classifications. They also mentioned that explainability helps to maximize clinical applicability for the specific task of geographic atrophy detection, so that clinicians trust the model's predictions and integrate them into their decision-making process.
Wang et al. (2023) [127] Drug repurposing Explanations are not always interpretable or reliable and do not always provide information that relates well to the application domain. Explanations need to connect well to the problem they explain so that reliable interpretations and decisions can be made.
Klar et al. (2024) [290] Factory layout design Explainability enables evaluating training processes and model decisions when using AI in factory layout planning. Explainability also enhances trust towards decisions. Explainability reveals relationships and the importance of features and therefore can give valuable information that can be used later in the factory layout design process.
Panos et al. (2023) [482] Predicting solar flares Explainability is important so model decisions can be evaluated and justified. Explainability can also help improve model performance and possibly give new information about solar flares and the features that predict them because of the high diagnostic capabilities of spectral data.
Fang et al. (2023) [483] Predicting landslides Explainability helps to make decisions on evacuations and interventions in landslide areas in an effective and ethical way. Explanations can also help identify the need for a specific intervention (slope stabilization, for example).
Karami et al. (2021) [484] Predicting response to the COVID-19 virus Explainability is needed to allow interpretation of model-made decisions and to find information about connections between features.
Baek et al. (2023) [485] Semiconductor equipment production The authors mentioned that explainability is important in their specific application to understand how deep learning algorithms make decisions, due to their complexity, and to explain the outputs.
Antoniou et al. (2022) [283] Attention deficit hyperactivity disorder (ADHD) diagnosis Because clinicians are only willing to adopt a technological solution if they understand the basis of the provided recommendation.
Nguyen et al. (2022) [486] Decision-making agents The authors mentioned that explainability is important to enhance trust, ensure legal compliance, improve user understanding, and increase user satisfaction in their specific application.
Solorio-Ramirez et al. (2021) [119] Predicting brain hemorrhage In ML tasks in the healthcare domain, it is usual that the model has to make predictions on data that it has not seen before. Model-made decisions in this kind of use case have to be explainable so the decision can be evaluated and justified. Explainability increases transparency and therefore understanding of the ML model's decision-making process.
de Velasco et al. (2023) [221] Identifying emotions from speech Identifying emotions from speech data is a complex task, and complex models are needed to achieve appropriate results. Explainability is needed to increase understanding of computational methods and models' decision-making processes in this use case.
Shahriar et al. (2022) [487] Predicting state of battery charge in electric vehicles Electric vehicles and their batteries are constantly evolving and can be very different, which makes developing a globally applicable model for state-of-charge estimation difficult. Explainability is needed for evaluating and improving model performance.
Kim et al. (2023) [488] Maritime engineering The authors mentioned that explainability is important in their specific application to provide transparency on how the ML model produces its predictions. Further, they mentioned that using XAI they can get a clear understanding of how different predictors influence the outcome of the prediction regarding vessel shaft power, which is essential for decision-making processes in the shipping industry.
Lemanska-Perek et al. (2022) [489] Sepsis management The authors mentioned that explainability is important in their specific application to support medical decision-making for individual patients, such as to better understand the model predictions, identify important features for each patient, and show how changes in variables affect predictions.
Minutti-Martinez et al. (2023) [490] Classifying chest X-ray images Healthcare professionals tend not to trust black-box models easily, which could be fixed by utilizing explainability methods. Explainability is also legally required in the healthcare domain.
Wang et al. (2023) [101] Predicting chronic obstructive pulmonary disease (COPD) Lack of explainability makes well-performing ML models useless in the healthcare domain. Explainability is needed to ensure interpretability and transparency, which leads to a wider application of ML in healthcare. Explainability also helps detect biases and improve model performance.
Kim et al. (2023) [491] Medical imaging for fracture detection The authors mentioned that the use of AI with explainability for fracture diagnosis has the potential to serve as a basis for specialist diagnosis. They also mentioned that AI could assist specialists by offering reliable opinions, preventing misinterpretations, and speeding up the decision-making process for diagnosis.
Ivanovic et al. (2023) [88] Medical data management; cancer patient case The authors mentioned that explainability is important in their specific application to ensure that the AI models are not only accurate but also transparent, trustworthy, and interpretable for the end users in the medical and healthcare domains.
Sullivan et al. (2023) [227] Deep Q-learning experience replay The authors mentioned that explainability is important in their specific application, Deep Reinforcement Learning (DRL), because the lack of transparency in DRL models leads to challenges in debugging and interpreting the decision-making process.
Humer et al. (2022) [492] Drug discovery The authors mentioned that explainability helps in identifying chemical regions of interest and gaining insights into the ML model's reasoning.
Zhang et al. (2023) [493] Power systems dispatch and operation The authors mentioned that the explainability of their specific application is important to provide a more intuitive and comprehensive explanation of decision-making for power systems with complex topology. Further, they mentioned that this is essential for operators to identify noteworthy power grid areas as the basis of auxiliary decision-making to realize efficient and accurate control.
Yang et al. (2023) [494] Machinery health prediction Explainability is needed in industrial machinery health assessment systems to increase reliability, allow evaluation of the model's decision-making process, and help the end user understand and trust the model.
Altini et al. (2023) [269] Nuclei classification from breast cancer images Explainability is legally required for ML models used in the healthcare domain, and because complex models are needed for their high performance, explainability techniques need to be used. Explainability also reveals important features and therefore increases interpretability and usability.
Papandrianos et al. (2022) [64] Predicting coronary artery disease from myocardial perfusion images Explainability is needed so medical professionals can verify the model's decisions.
Liang et al. (2021) [230] Identifying deceptive online content Explainability is needed to evaluate the model in cases of wrong decisions and to help develop models that are more reliable against targeted attacks when using ML to detect deceptive text/content.
Alabdulhafith et al. (2023) [60] Remote prognosis of the state of intensive care unit patients Explainability is needed to ensure the reliability of the model in addition to performance metrics. Medical professionals need explanations to evaluate models' decisions and their medical relevance to be able to use ML models in practice.
Zolanvari et al. (2023) [299] Intrusion detection Lack of explainability leads to lack of trust, and trusting ML models without explanations leads to a lack of applicability and legitimacy. Explainability is needed to ensure transparency and applicability.
Carta et al. (2021) [279] Stock market forecasting The authors mentioned that the explainability of their specific application is important to provide transparency and understanding of the prediction process. Further, they mentioned that explainability allows for a deep understanding of the obtained set of features and provides insights into the factors influencing the stock market forecasting results.
Esmaeili et al. (2021) [117] Brain tumor localization The authors mentioned that the explainability of their specific application is important to improve the interpretability, transparency, and reliability of deep learning models in the context of tumor localization in brain imaging.
Cheng et al. (2022) [495] Healthcare predictive modeling The authors mentioned that explainability is important in their specific application because models in the healthcare domain are required to be transparent and interpretable. They also mentioned that clinicians may not have technical expertise in machine learning, and therefore explanations need to be provided in a way that aligns with their domain knowledge rather than technical details.
Wenninger et al. (2022) [294] Building energy performance prediction The authors mentioned that explainability is important in their specific application to understand the mechanics behind the methods applied and to increase trust and accountability in the context of retrofit implementation, where uncertainty is a major barrier. Further, they mentioned that explainability provides insights for experts on the influence of various building characteristics on the final energy performance predictions.
Laqua et al. (2023) [496] E-bikes The authors mentioned that explainability is important in their specific application to enhance the understanding of the user experience of e-bike riding.
Espinoza et al. (2021) [268] Antibiotic discovery The authors mentioned that explainability is important in their specific application for model interpretability, validation, feature selection optimization, and advancing scientific discovery in the context of predicting antimicrobial mechanisms of action using AI models.
Sanderson et al. (2023) [497] Flood inundation mapping The authors mentioned that deep learning models are often considered "black boxes", which raises challenges regarding transparency and potential ethical biases. By applying XAI to flood inundation mapping, they aimed to enhance insight into the behavior of their proposed deep learning model and how it is impacted by varying input data types.
Abe et al. (2023) [498] Estimating pathogenicity of genetic variants The authors mentioned that by incorporating explainability into their AI model, they can provide understandable explanations to physicians, who can make informed decisions based on the AI's estimation results and genomic medical knowledge. They also mentioned that this approach eliminates bottlenecks in genomic medicine by combining high accuracy with explainability and supporting the identification of disease-causing variants in patients.
Kerz et al. (2023) [499] Mental health detection The authors mentioned that there is a growing need for explainable AI approaches in psychiatric diagnosis and prediction to ensure transparency in the decision-making process.
Kim et al. (2022) [500] Satellite image analysis for environment monitoring and analysis The authors mentioned that explainability in their specific application is important to improve the reliability of AI-based systems by providing visual explanations of predictions made by black-box deep learning models. They also mentioned that explainability helps in preventing critical errors, especially false negative errors in image selection, and that by providing visual explanations, the system can be refined based on supervisor feedback, which can reduce the risk of misinterpretation or incorrect predictions.
Thrun et al. (2021) [501] Water quality prediction Explainability is necessary to enable the use of complex and high-performing models in predicting water quality, because domain experts usually are not familiar with AI. They need interpretable and clear explanations to evaluate and trust the model's decisions.
Gowrisankar et al. (2024) [231] Detecting deepfake images Explainability is necessary to ensure users' trust toward the ML model and to help users understand the ML model better. Different XAI methods perform differently (especially saliency map techniques), and therefore efficient XAI evaluation techniques are needed to help find the most accurate and interpretable XAI technique.
Beni et al. (2023) [502] Predicting weathering on rock slopes The ML model used in weathering prediction does not give information about the contributions of different features. Explainability is needed to gain insight into model performance and therefore evaluate the model's decisions.
Singh et al. (2022) [84] Arrhythmia classification An ML model often has to deal with unseen data in the arrhythmia classification task, and explainability is needed to evaluate model performance and decisions in these cases. Healthcare professionals tend not to trust AI-based diagnostic tools, and explainability would increase trust towards ML models and therefore enable the use of AI diagnostic tools. In the healthcare domain, explainability is also necessary in an ethical and legal sense.
Zhou et al. (2023) [503] Predict dissolved oxygen concentrations in karst springflow Explainability is needed for accessing information about the physical processes and mechanisms learned by the ML model. Data from karstic areas are often complex, and explainability is therefore even more necessary for evaluating model performance.
Maqsood et al. (2022) [118] Brain tumor detection Brain image data are complex, and models that make predictions based on those images need to be complex. Explaining the model provides more information about the model.
Cui et al. (2022) [287] Machine reading comprehension Explainability allows users to understand how the model answers questions, which can be very helpful for educational purposes.
Barros et al. (2023) [504] Cement industry dispatch workflow The authors mentioned that explainability in their specific application is important for obtaining information about potential blockages of transportation vehicles, enabling monitoring and inspection to prevent delays or process restarts in advance. They also mentioned that explainability helps avoid security issues such as violations of federal regulations on vehicle weight. Also, in the context of finances, they mentioned that explainability assists in preventing orders from being sent in quantities greater than requested, which helps to avoid monetary losses.
Kayadibi et al. (2023) [505] Recognizing and classifying retinal disorders Explainability helps medical professionals understand ML-made diagnoses and use them as diagnostic tools. Explainability enables more accurate, efficient, and reliable diagnosis because of the necessary human evaluation step and the complex nature of retinal data.
Qamar et al. (2023) [506] Fruit classification The authors mentioned that explainability is important in fruit classification because it can enhance processes such as sorting, grading, and packaging, reducing waste and increasing profitability. Further, they mentioned that by using explainability, they can enhance the transparency and interpretability of the models used in automated fruit classification systems, which improves trust, identifies biases, meets regulatory requirements, and increases users' confidence in the system.
Crespi et al. (2023) [507] Multi-agent systems for military operations The authors mentioned that explainability is important in their specific application because it can provide insights into the inner workings of the learned strategies, facilitate human understanding of agent behaviors, and enhance transparency and trust in the decision-making processes of the multi-agent system.
Sabrina et al. (2022) [508] Optimizing crop yield The authors mentioned that explainability is important in their specific application to ensure that the system is trusted and easily adopted by farmers. Further, they mentioned that this explainability is essential for making the system understandable, trustworthy, and user-friendly for farmers.
Wu et al. (2023) [509] Flood prediction The authors mentioned that explainability is important in their specific application to improve model credibility and provide insights into the factors influencing runoff predictions. Further, they mentioned that explainability is essential for understanding the complex relationships between meteorological variables and runoff dynamics.
Nakamura et al. (2023) [510] Disease prevention The authors mentioned that explainability is important in their specific application to identify concrete disease prevention methods at the individual level. They also mentioned that explainability is essential for setting intervention goals for future disease development prevention and improving outcomes through targeted health condition improvements.
Damian et al. (2022) [232] Detecting fake news Explainability is needed to gain insight into the ML model's reasoning and decision-making process. Explainability can also help develop better and more effective models by revealing the most important features, which is important with text data, where there are thousands of features (i.e., different words).
Oh et al. (2021) [511] Glaucoma diagnosis The authors mentioned that explainability is important in their specific application to provide a basis for ophthalmologists to determine whether to trust the predicted results.
Borujeni et al. (2023) [512] Air pollution forecasting The authors mentioned that explainability is important in their specific application to get a better understanding of how the model reaches its decisions. They also referred to the phenomenon of "Clever Hans" predictors, where models might perform well on training and test datasets but fail in practical scenarios. Thus, they mentioned that by understanding how the model makes decisions, it is possible to identify instances where the model may be relying on incorrect criteria for predictions. They also mentioned that explainability is essential for efficient feature selection and model optimization.
Alharbi et al. (2023) [513] Unmanned aerial vehicle (UAV) operation The authors mentioned that explainability is important in their specific application to ensure the safe, efficient, and equitable allocation of airspace system resources in UTM operations.
Sheu et al. (2023) [514] Pneumonia prediction In the pneumonia classification application of ML, explainability is needed to gain insight into the important features that affect the classification of pneumonia. Explainability is also needed to convince medical professionals of the model's reliability and therefore to gain acceptance from the medical domain. Users need to be able to interpret and trust the ML model in order to use it efficiently in practice as a diagnostic tool.
Solis-Martin et al. (2023) [291] Predictive maintenance Time series data in predictive maintenance are complex and hard to interpret. Explainability is needed to better understand the ML model and the relationships between inputs and outputs.
Castiglione et al. (2023) [128] Drug repurposing Explainability increases the reliability of ML models. In drug repurposing tasks, explainability is also mandatory to ensure transparency and accountability.
Aslam et al. (2022) [515] Antepartum fetal monitoring and risk prediction of IUGR The authors mentioned that explainability in their specific application is important to enhance the interpretability of ML models, generate confidence in the predictions, add to comprehensibility, and assist doctors in their decision-making process regarding antepartum fetal monitoring to predict the risk of IUGR.
Peng et al. (2022) [516] Fault detection and diagnosis Explainability is needed to gain insight into the reasons behind predicted faults and to ensure model performance with complex data and possible online/offline use.
Na Pattalung et al. (2021) [517] Critical care medicine for ICU patients The authors mentioned that explainability in their specific application is important to provide a causal explanation in the ML models, and making predictions from a black-box model visible is essential to understanding the severity of illness and to enabling early interventions for patients in the ICU.
Oliveira et al. (2023) [518] Decision support system Lack of explainability is concerning when ML is used in high-stakes cases. Explainability is also needed to ensure the legitimacy of AI use.
Burgueno et al. (2023) [519] Land cover classification Using explainability techniques leads to transparency, justifiability, and informativeness of the ML model, which is necessary in applications where critical aspects are involved.
Horst et al. (2023) [520] Human gait recognition The authors mentioned that explainability is important in their specific application to identify the most relevant characteristics used for classification in clinical gait analysis.
Napoles et al. (2023) [521] Predictive analytics; case study in diabetes The authors mentioned that explainability is important in their specific application to understand how an algorithm works and how it can help analysts with understanding the key questions and needs of their organization.
Ni et al. (2023) [522] Hydrometeorology The authors mentioned that providing physical explanations for data-driven models is essential. Further, they mentioned that it is important to understand the inner workings of the deep learning model and provide insights into what the network has learned.
Amiri-Zarandi et al. (2023) [523] Threat detection in IoT The authors mentioned that explainability is important to help cybersecurity experts understand the reasons behind detected threats, improve security monitoring practices, and communicate with users about the reasons for their investigation.
Huang et al. (2023) [250] Soil moisture prediction Explainability enables extracting information about relationships between features in the data and/or in the model. Explainability increases trust in ML models among users and decision-makers.
Niu et al. (2022) [524] Diabetic retinopathy detection The authors mentioned that explainability in their specific application is important to understand how DL models make predictions, to improve trust, and to encourage collaboration within the medical community.
Kliangkhlao et al. (2022) [525] Predicting demand and supply behavior Causality explanations help decision-makers understand the reasons behind models' decisions.
Singha et al. (2023) [266] Cancer treatment; drug response prediction The authors mentioned that traditional AI models operate as black boxes, and in critical domains like cancer therapy, where trust, accountability, and regulatory compliance are essential, the lack of explainability in AI models is a significant drawback. Further, they mentioned that using explainability can provide clear, interpretable, and human-understandable explanations for the model's actions and decisions, which improves trustworthiness and usability and facilitates further research on potential drug targets for cancer therapy.
Thrun (2022) [278] Stock market analysis The authors mentioned that commonly known explanations for stock-picking processes are often too vague to be applied in concrete cases. They also mentioned that explainability is important to provide specific criteria for stock picking that are explainable and can lead to above-average returns in the stock market.
Dissanayake et al. (2021) [526] Heart anomaly detection The authors mentioned that explainability in their specific application is important for trusting the predictions made by the models in the medical domain. They also mentioned that even if a model performs with excellent accuracy, understanding its behavior and predictions is important for medical experts and patients to trust the validity of the system.
Dastile et al. (2021) [527] Credit scoring The authors mentioned that explainability in their specific application of credit scoring is important due to regulatory requirements like the Basel Accord, which mandates that lending institutions must be able to explain to loan applicants why their applications were denied. They also mentioned that explainability is important to gain trust in model predictions, ensure no discrimination occurs during the credit assessment process, and meet the "right to explanation" requirement under regulations like the European Union General Data Protection Regulation (GDPR).
Khan et al. (2022) [528] COVID-19 classification The authors mentioned that explainability in their specific application is important to provide significant proof that explainable AI is essential in the context of healthcare applications like COVID-19 diagnosis. They also mentioned that using visualization techniques like Grad-CAM helps highlight the crucial regions in the input images that influenced the deep learning model's predictions and enhances understanding of and trust in the classification results for COVID-19 detection.
Moon et al. (2021) [529] Alzheimer's disease The authors mentioned that explainability is important in their specific application to provide insights into the complex models used for classification.
Carrieri et al. (2021) [42] Skin microbiome composition The authors mentioned that explainability is important in their specific application to enhance the utility and reliability of ML models in microbiome research and to facilitate the translation of research findings into actionable insights.
Beker et al. (2023) [236] Volcanic deformation detection The authors mentioned that explainability is important in their specific application to understand model behavior, improve performance, validate predictions, and determine the sensitivity of the model in detecting subtle volcanic deformations in the InSAR data.
Kiefer et al. (2022) [530] Document classification The authors mentioned that explainability is important in their specific application to align machine learning systems with human goals, contexts, concerns, and ways of working.
Sokhansanj et al. (2022) [216] Inter Partes Review (IPR) predictions The authors mentioned that explainability is important in their specific application to align machine learning systems with human goals, contexts, concerns, and ways of working.
Matuszelanski et al. (2022) [207] Customer churn prediction The authors mentioned that explainability in their specific application is important to understand the limitations of the model and address issues without sacrificing the performance gain from black-box models.
Franco et al. (2021) [531] Face recognition The authors mentioned that explainability in their specific application of face recognition is important because of the widespread and controversial use of facial recognition technology in various contexts. Further, they mentioned that making face recognition algorithms more trustworthy through explainability, fairness, and privacy can improve public opinion and general acceptance of these technologies.
Montiel-Vazquez et al. (2022) [532] Empathy detection in textual communication The authors mentioned that explainability in their specific application is important to improve transparency and a better understanding of how the model makes decisions, which is essential for building trust in the system and for potential applications in various fields where empathy detection is valuable.
Mollas et al. (2023) [533] | User-oriented/interpretable XAI | Explaining the explanations with good metrics is important when complex models are approximated with understandable explanations. Understandability of these explainability metrics is important so the end user can evaluate the outcomes (a simple surrogate-fidelity metric is sketched after this table).
Wei et al. (2022) [251] | Detecting disease from fruit leaves | Lack of explainability hinders the widespread use of black-box models that could be effective and beneficial in agriculture. Black-box models and their interpretability are important in agriculture because of the complex data and the variety of plant species that are of interest.
Samih et al. (2021) [224] | Movie recommendations | Explainability in recommendation systems increases efficiency, transparency, and user satisfaction.
Juang et al. (2021) [534] | Hand palm tracking | The authors mentioned that explainability is important because the linguistic relationship between the input and output variables of each fuzzy rule is explainable, and providing human-explainable fuzzy features and inference models can improve the interpretability of the tracking method. Further, they mentioned that visualization of fuzzy features can give a clear understanding of the decision-making process.
Cicek et al. (2023) [535] | Diagnosing nephrotoxicity | In the field of healthcare, explainability and interpretability are needed to enable the use of black-box ML models for diagnosing diseases.
Jung et al. (2023) [536] | Medicinal plants classification | The authors mentioned that explainability helps in understanding how the ML model makes predictions and enables the assessment of whether the model’s learning intentions are consistent in the context of classifying similar medicinal plant species like Cynanchum wilfordii and Cynanchum auriculatum.
De Magistris et al. (2022) [233] | Detecting fake news | Explanations are needed to convince people about the classification of fake news.
Rawal et al. (2023) [537] | Identification of variables associated with the risk of developing neutralizing antidrug antibodies to factor VIII in hemophilia A patients | The authors mentioned that explainability is important in their specific application for identifying and ranking variables associated with the risk of developing neutralizing antidrug antibodies to Factor VIII in hemophilia A patients.
Kumar et al. (2021) [220] | Sarcasm detection in dialogues | The authors mentioned that explainability in their specific application is important to understand which words or features influence the model’s decision-making process and how the model identifies sarcasm in conversational threads.
Yeung et al. (2022) [538] | Photonic device design | The authors mentioned that explainability in their specific application is important to understand the relationship between device structure and performance in photonic inverse design. Further, they mentioned that explainability is important to reveal the structure-performance relationships of each device, highlight the features contributing to the figure-of-merit (FOM), and potentially optimize the devices further by overcoming local minima in the adjoint optimization process.
Naeem et al. (2022) [539] | Malware detection in IoT devices | The authors mentioned that explainability is important in their specific application to enhance model transparency, facilitate security analysis, evaluate model performance, and support continuous model improvement.
Mey et al. (2022) [540] | Machine fault diagnosis | The authors mentioned that the black-box nature of deep learning models obscures the decision-making process and makes it challenging for humans to interpret the classifications. They mentioned that explainability can make the classification process transparent and provide insights into why certain decisions were made by the model.
Martinez et al. (2023) [541] | Genomics and gene regulation | The authors mentioned that explainability can provide transparency to ML models and allow for a better understanding of how the predictions were made. This transparency was essential in delivering a high-scale annotation of archaeal promoter sequences and ensuring the reliability of the curated promoter sequences generated by the model.
Nkengue et al. (2024) [542] | COVID-19 detection | The authors mentioned that explainability in their specific application is important to provide a cross-validation tool for practitioners. Further, they mentioned that highlighting the different patterns of the ECG signal related to a COVID-19/non-COVID-19 classification helps practitioners understand which features of the signal are responsible for the classification, and it supports decision-making and validation of results.
Behrens et al. (2022) [543] | Climate modeling | The authors mentioned that explainability in their specific application is important to enhance the interpretability of convective processes in climate models.
Fatahi et al. (2022) [544] | Cement production | The authors mentioned that explainability in their specific application is important to understand the correlations between operational variables and energy consumption factors in an industrial vertical roller mill circuit.
De Groote et al. (2022) [545] | Mechatronic systems modeling | The authors mentioned that by incorporating physics-based relations within the Neural Network Augmented Physics (NNAP) model, they aimed to provide interpretable explanations that align with physical laws. Further, they mentioned that, in this way, the understanding of system dynamics with partially unknown interactions leads to more reliable and insightful predictions.
Takalo-Mattila et al. (2022) [546] | Steel quality prediction | The authors mentioned that explainability is important in their specific application to enhance transparency and allow users to understand why a particular decision was reached, to build trust in the model’s predictions, to audit the decisions made by the model, and to ensure compliance with regulations and standards.
Drobnic et al. (2023) [102] | Assessment of developmental status in children | The authors mentioned that explainability in their specific application is important to enhance the interpretability of the model’s predictions and to provide insights into the features that influence the motor efficiency index (MEI) assessment of children and adolescents.
Saarela et al. (2022) [32] | Skin cancer classification | The authors mentioned that explainability in their specific application is important for building trust and confidence in the model’s decisions and for knowing why the system has made a particular decision. Further, they mentioned that using explanations for the model’s decisions increases trust among users and also potentially teaches humans to make better decisions in skin lesion classification.
Jang et al. (2023) [547] | Energy management | The authors mentioned that in the field of energy management, understanding complex AI models can be challenging because of their black-box nature. They mentioned that explainability can reveal the impact of input variables on the model’s output. Further, they mentioned that explainability is essential for EMS managers to comprehend why specific predictions are made, enabling informed decision-making in energy management processes.
Aishwarya et al. (2022) [548] | Diagnostic of common lung pathologies | The authors mentioned that explainability in their specific application is important to improve the interpretability of the deep learning model’s outputs, which is essential for medical professionals to trust and understand the diagnostic results, leading to faster diagnosis and early treatment.
Kaczmarek-Majer et al. (2022) [549] | Mental health; bipolar disorder | The authors mentioned that explainability is important in their specific application to build trust, enhance understanding, improve decision-making processes, and manage uncertainty in the context of psychiatric care and mental health diagnosis.
Bae (2024) [550] | Malware classification | The authors mentioned that explainability in their specific application is important to address the challenges of interpreting heterogeneous data and to provide reliable explanations for the models used in cybersecurity applications, especially in malware detection.
Mahim et al. (2024) [109] | Alzheimer’s disease detection and classification | The authors mentioned that in medical image classification, XAI is essential for helping medical professionals understand and interpret the decisions made by AI systems, which leads to more informed decisions about patient care and treatment plans. They also mentioned that XAI is essential for regulatory and ethical reasons, as transparency in the decision-making process of AI systems in medical applications is required to ensure consistency with medical standards and regulations.
Gerussi et al. (2022) [551] | Primary Biliary Cholangitis (PBC) risk prediction | The authors mentioned that explainability in their specific application is important to provide insights into the decision-making process of the ML model and facilitate its application in precision medicine and risk stratification for PBC.
Li et al. (2022) [552] | MRI imaging | The authors mentioned that explainability in their specific application is important to increase the transparency and interpretability of the super-resolution process for clinical MRI scans. Further, they mentioned that explainability is essential for understanding the decision-making process of the deep learning model and ensuring that the generated high-resolution images are clinically relevant and accurate.
Shang et al. (2021) [267] | Clinical practice | The authors mentioned that explainability in their specific application is important to help clinicians better understand and utilize important clinical information buried in electronic health record (EHR) data. They also mentioned that explainable illustrations of important clinical findings are necessary to provide comprehensive and convincing details for better understanding and acceptance by clinicians beyond their specialties.
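
The Grad-CAM visualization mentioned for Khan et al. [528] can be illustrated with a minimal sketch in Python/PyTorch. The sketch below is only illustrative and is not the implementation of [528]: the ResNet-18 backbone, the choice of layer4 as the target convolutional layer, and the dummy 224 × 224 input are placeholder assumptions made for the example.

```python
# Minimal Grad-CAM sketch (illustrative only; not the pipeline of Khan et al. [528]).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None)   # placeholder backbone; the reviewed works use various networks
model.eval()

activations, gradients = {}, {}

def save_activation(module, inp, out):
    # Store the feature maps of the target layer during the forward pass.
    activations["value"] = out.detach()

def save_gradient(module, grad_in, grad_out):
    # Store the gradient of the class score w.r.t. the target layer's output.
    gradients["value"] = grad_out[0].detach()

# Assumption: the last convolutional block (layer4) is used as the target layer.
target_layer = model.layer4
target_layer.register_forward_hook(save_activation)
target_layer.register_full_backward_hook(save_gradient)

def grad_cam(x, class_idx=None):
    """Return a (H, W) heatmap of the regions that most influenced the prediction."""
    logits = model(x)                      # forward pass fills `activations`
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()        # backward pass fills `gradients`

    acts = activations["value"][0]         # (C, h, w)
    grads = gradients["value"][0]          # (C, h, w)
    weights = grads.mean(dim=(1, 2))       # global-average-pooled gradients per channel
    cam = F.relu((weights[:, None, None] * acts).sum(dim=0))
    cam = cam / (cam.max() + 1e-8)         # normalize to [0, 1]
    # Upsample to the input resolution so the map can be overlaid on the image.
    cam = F.interpolate(cam[None, None], size=x.shape[-2:], mode="bilinear",
                        align_corners=False)[0, 0]
    return cam, class_idx

if __name__ == "__main__":
    dummy = torch.randn(1, 3, 224, 224)    # stand-in for a preprocessed input image
    heatmap, cls = grad_cam(dummy)
    print(heatmap.shape, cls)
```

The resulting heatmap can be overlaid on the input image to show which regions drove the predicted class, which is the kind of visual evidence for model decisions referred to in the table above.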
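One simple, user-understandable way to quantify how well an explanation reflects a black-box model, in the spirit of the metric-based evaluation of explanations discussed by Mollas et al. [533], is surrogate fidelity: the agreement between an interpretable surrogate and the black box it approximates. The sketch below is a generic illustration, not the metric suite of [533]; the scikit-learn dataset, the random-forest black box, and the depth-3 decision-tree surrogate are placeholder assumptions.

```python
# Minimal surrogate-fidelity sketch (illustrative only).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Black-box" model whose behaviour we want to explain.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Interpretable surrogate trained on the black-box predictions (not on the true labels).
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: agreement between surrogate and black box on held-out data.
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"Surrogate fidelity on test data: {fidelity:.3f}")
```

A fidelity close to 1 indicates that the simple surrogate, whose rules an end user can read, mimics the black box closely on unseen data; a low fidelity signals that the explanation derived from the surrogate should not be trusted.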

References
1. Adadi, A.; Berrada, M. Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access 2018,
6, 52138–52160. [CrossRef]
2. Minh, D.; Wang, H.X.; Li, Y.F.; Nguyen, T.N. Explainable artificial intelligence: A comprehensive review. Artif. Intell. Rev. 2022,
55, 3503–3568. [CrossRef]
3. Saeed, W.; Omlin, C. Explainable AI (XAI): A systematic meta-survey of current challenges and future opportunities. Knowl.-Based
Syst. 2023, 263, 110273. [CrossRef]
4. Nauta, M.; Trienes, J.; Pathak, S.; Nguyen, E.; Peters, M.; Schmitt, Y.; Schlötterer, J.; van Keulen, M.; Seifert, C. From anecdotal
evidence to quantitative evaluation methods: A systematic review on evaluating explainable AI. ACM Comput. Surv. 2023, 55, 295.
[CrossRef]
5. Arrieta, A.B.; Díaz-Rodríguez, N.; Del Ser, J.; Bennetot, A.; Tabik, S.; Barbado, A.; García, S.; Gil-López, S.; Molina, D.; Benjamins,
R.; et al. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf.
Fusion 2020, 58, 82–115. [CrossRef]
6. Hu, Z.F.; Kuflik, T.; Mocanu, I.G.; Najafian, S.; Shulner Tal, A. Recent studies of XAI-review. In Proceedings of the Adjunct 29th
ACM Conference on User Modeling, Adaptation and Personalization, Utrecht, The Netherlands, 21–25 June 2021; pp. 421–431.
7. Islam, M.R.; Ahmed, M.U.; Barua, S.; Begum, S. A systematic review of explainable artificial intelligence in terms of different
application domains and tasks. Appl. Sci. 2022, 12, 1353. [CrossRef]
8. Saranya, A.; Subhashini, R. A systematic review of Explainable Artificial Intelligence models and applications: Recent developments and future trends. Decis. Anal. J. 2023, 7, 100230.
9. Schwalbe, G.; Finzel, B. A comprehensive taxonomy for explainable artificial intelligence: A systematic survey of surveys on
methods and concepts. Data Min. Knowl. Discov. 2024, 38, 3043–3101. [CrossRef]
10. Speith, T. A review of taxonomies of explainable artificial intelligence (XAI) methods. In Proceedings of the 2022 ACM Conference
on Fairness, Accountability, and Transparency, Seoul, Republic of Korea, 21–24 June 2022; pp. 2239–2250.
11. Vilone, G.; Longo, L. Notions of explainability and evaluation approaches for explainable artificial intelligence. Inf. Fusion 2021,
76, 89–106. [CrossRef]
12. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; PRISMA Group. Preferred reporting items for systematic reviews and
meta-analyses: The PRISMA statement. Ann. Intern. Med. 2009, 151, 264–269. [CrossRef]
13. Samek, W.; Montavon, G.; Vedaldi, A.; Hansen, L.K.; Müller, K.R. Explainable AI: Interpreting, Explaining and Visualizing Deep
Learning; Springer Nature: Berlin/Heidelberg, Germany, 2019; Volume 11700.
14. Koh, P.W.; Liang, P. Understanding black-box predictions via influence functions. In Proceedings of the International Conference
on Machine Learning, PMLR, Sydney, Australia, 6–11 August 2017; Volume 70.
15. Yeh, C.K.; Kim, J.; Yen, I.E.H.; Ravikumar, P.K. Representer point selection for explaining deep neural networks. Adv. Neural Inf.
Process. Syst. 2018, 31.
16. Li, O.; Liu, H.; Chen, C.; Rudin, C. Deep Learning for Case-Based Reasoning through Prototypes: A Neural Network that
Explains Its Predictions. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7
February 2018.
17. Wachter, S.; Mittelstadt, B.; Russell, C. Counterfactual Explanations without Opening the Black Box: Automated Decisions and
the GDPR. Harv. J. Law Technol. 2017, 31, 841. [CrossRef]
18. Erhan, D.; Bengio, Y.; Courville, A.; Vincent, P. Visualizing higher-layer features of a deep network. Univ. Montr. 2009, 1341.
19. Towell, G.G.; Shavlik, J.W. Extracting refined rules from knowledge-based neural networks. Mach. Learn. 1993, 13, 71–101.
[CrossRef]
20. Castro, J.L.; Mantas, C.J.; Benitez, J.M. Interpretation of artificial neural networks by means of fuzzy rules. IEEE Trans. Neural
Netw. 2002, 13, 101–116. [CrossRef]
21. Mitra, S.; Hayashi, Y. Neuro-fuzzy rule generation: Survey in soft computing framework. IEEE Trans. Neural Netw. 2000, 11,
748–768. [CrossRef]
22. Fisher, A.; Rudin, C.; Dominici, F. All Models are Wrong, but Many are Useful: Learning a Variable’s Importance by Studying an
Entire Class of Prediction Models Simultaneously. J. Mach. Learn. Res. 2019, 20, 1–81.
23. Fong, R.C.; Vedaldi, A. Interpretable explanations of black boxes by meaningful perturbation. In Proceedings of the IEEE
International Conference on Computer Vision, Venice, Italy, 22–29 October 2017.
24. Zintgraf, L.M.; Cohen, T.S.; Adel, T.; Welling, M. Visualizing deep neural network decisions: Prediction difference analysis. In
Proceedings of the International Conference on Learning Representations, ICLR, Toulon, France, 24–26 April 2017; pp. 1–12.
25. Zeiler, M.D.; Fergus, R. Visualizing and understanding convolutional networks. In Proceedings of the European Conference on
Computer Vision, Zurich, Switzerland, 6–12 September 2014.
26. Saarela, M.; Jauhiainen, S. Comparison of feature importance measures as explanations for classification models. SN Appl. Sci.
2021, 3, 272. [CrossRef]
27. Wojtas, M.; Chen, K. Feature Importance Ranking for Deep Learning. In Proceedings of the Advances in Neural Information
Processing Systems (NIPS 2020), Vancouver, BC, Canada, 6–12 December 2020; Volume 33, pp. 5105–5114.
28. Burkart, N.; Huber, M.F. A Survey on the Explainability of Supervised Machine Learning. J. Artif. Intell. Res. 2021, 70, 245–317.
[CrossRef]
29. Saarela, M. On the relation of causality-versus correlation-based feature selection on model fairness. In Proceedings of the 39th
ACM/SIGAPP Symposium on Applied Computing, Avila, Spain, 8–12 April 2024; pp. 56–64.
30. Guidotti, R.; Monreale, A.; Ruggieri, S.; Turini, F.; Giannotti, F.; Pedreschi, D. A survey of methods for explaining black box
models. ACM Comput. Surv. (CSUR) 2018, 51, 93. [CrossRef]
31. Molnar, C. Interpretable Machine Learning; Lulu.com: Morrisville, NC, USA, 2020.
32. Saarela, M.; Geogieva, L. Robustness, Stability, and Fidelity of Explanations for a Deep Skin Cancer Classification Model. Appl.
Sci. 2022, 12, 9545. [CrossRef]
33. Carvalho, D.V.; Pereira, E.M.; Cardoso, J.S. Machine learning interpretability: A survey on methods and metrics. Electronics 2019,
8, 832. [CrossRef]
34. Wang, Y.; Zhang, T.; Guo, X.; Shen, Z. Gradient based Feature Attribution in Explainable AI: A Technical Review. arXiv 2024,
arXiv:2403.10415.
35. Saarela, M.; Kärkkäinen, T. Can we automate expert-based journal rankings? Analysis of the Finnish publication indicator. J. Inf.
2020, 14, 101008. [CrossRef]
36. Samek, W.; Montavon, G.; Lapuschkin, S.; Anders, C.J.; Müller, K.R. Explaining deep neural networks and beyond: A review of
methods and applications. Proc. IEEE 2021, 109, 247–278. [CrossRef]
37. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.;
Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. Int. J. Surg. 2021,
88, 105906. [CrossRef]
38. Birkle, C.; Pendlebury, D.A.; Schnell, J.; Adams, J. Web of Science as a data source for research on scientific and scholarly activity.
Quant. Sci. Stud. 2020, 1, 363–376. [CrossRef]
39. Kitchenham, B.; Charters, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering; EBSE Technical Report,
EBSE-2007-01; Software Engineering Group, School of Computer Science and Mathematics, Keele University: Keele, UK, 2007.
40. Da’u, A.; Salim, N. Recommendation system based on deep learning methods: A systematic review and new directions. Artif.
Intell. Rev. 2020, 53, 2709–2748. [CrossRef]
41. Mridha, K.; Uddin, M.M.; Shin, J.; Khadka, S.; Mridha, M.F. An Interpretable Skin Cancer Classification Using Optimized
Convolutional Neural Network for a Smart Healthcare System. IEEE Access 2023, 11, 41003–41018. [CrossRef]
42. Carrieri, A.P.; Haiminen, N.; Maudsley-Barton, S.; Gardiner, L.J.; Murphy, B.; Mayes, A.E.; Paterson, S.; Grimshaw, S.; Winn, M.;
Shand, C.; et al. Explainable AI reveals changes in skin microbiome composition linked to phenotypic differences. Sci. Rep. 2021,
11, 4565. [CrossRef]
43. Maouche, I.; Terrissa, L.S.; Benmohammed, K.; Zerhouni, N. An Explainable AI Approach for Breast Cancer Metastasis Prediction
Based on Clinicopathological Data. IEEE Trans. Biomed. Eng. 2023, 70, 3321–3329. [CrossRef] [PubMed]
44. Yagin, B.; Yagin, F.H.; Colak, C.; Inceoglu, F.; Kadry, S.; Kim, J. Cancer Metastasis Prediction and Genomic Biomarker Identification
through Machine Learning and eXplainable Artificial Intelligence in Breast Cancer Research. Diagnostics 2023, 13, 3314. [CrossRef]
45. Kaplun, D.; Krasichkov, A.; Chetyrbok, P.; Oleinikov, N.; Garg, A.; Pannu, H.S. Cancer Cell Profiling Using Image Moments and
Neural Networks with Model Agnostic Explainability: A Case Study of Breast Cancer Histopathological (BreakHis) Database.
Mathematics 2021, 9, 2616. [CrossRef]
46. Kwong, J.C.C.; Khondker, A.; Tran, C.; Evans, E.; Cozma, I.A.; Javidan, A.; Ali, A.; Jamal, M.; Short, T.; Papanikolaou, F.;
et al. Explainable artificial intelligence to predict the risk of side-specific extraprostatic extension in pre-prostatectomy patients.
Cuaj-Can. Urol. Assoc. J. 2022, 16, 213–221. [CrossRef] [PubMed]
47. Ramirez-Mena, A.; Andres-Leon, E.; Alvarez-Cubero, M.J.; Anguita-Ruiz, A.; Martinez-Gonzalez, L.J.; Alcala-Fdez, J. Explainable
artificial intelligence to predict and identify prostate cancer tissue by gene expression. Comput. Methods Programs Biomed. 2023,
240, 107719. [CrossRef]
48. Anjara, S.G.; Janik, A.; Dunford-Stenger, A.; Mc Kenzie, K.; Collazo-Lorduy, A.; Torrente, M.; Costabello, L.; Provencio, M.
Examining explainable clinical decision support systems with think aloud protocols. PLoS ONE 2023, 18, e0291443. [CrossRef]
49. Wani, N.A.; Kumar, R.; Bedi, J. DeepXplainer: An interpretable deep learning based approach for lung cancer detection using
explainable artificial intelligence. Comput. Methods Programs Biomed. 2024, 243, 107879. [CrossRef]
50. Laios, A.; Kalampokis, E.; Mamalis, M.E.; Tarabanis, C.; Nugent, D.; Thangavelu, A.; Theophilou, G.; De Jong, D. RoBERTa-
Assisted Outcome Prediction in Ovarian Cancer Cytoreductive Surgery Using Operative Notes. Cancer Control. 2023, 30,
10732748231209892. [CrossRef]
51. Laios, A.; Kalampokis, E.; Johnson, R.; Munot, S.; Thangavelu, A.; Hutson, R.; Broadhead, T.; Theophilou, G.; Leach, C.; Nugent,
D.; et al. Factors Predicting Surgical Effort Using Explainable Artificial Intelligence in Advanced Stage Epithelial Ovarian Cancer.
Cancers 2022, 14, 3447. [CrossRef]
52. Ghnemat, R.; Alodibat, S.; Abu Al-Haija, Q. Explainable Artificial Intelligence (XAI) for Deep Learning Based Medical Imaging
Classification. J. Imaging 2023, 9, 177. [CrossRef]
53. Lohaj, O.; Paralic, J.; Bednar, P.; Paralicova, Z.; Huba, M. Unraveling COVID-19 Dynamics via Machine Learning and XAI:
Investigating Variant Influence and Prognostic Classification. Mach. Learn. Knowl. Extr. 2023, 5, 1266–1281. [CrossRef]
54. Sarp, S.; Catak, F.O.; Kuzlu, M.; Cali, U.; Kusetogullari, H.; Zhao, Y.; Ates, G.; Guler, O. An XAI approach for COVID-19 detection
using transfer learning with X-ray images. Heliyon 2023, 9, e15137. [CrossRef]
55. Sargiani, V.; De Souza, A.A.; De Almeida, D.C.; Barcelos, T.S.; Munoz, R.; Da Silva, L.A. Supporting Clinical COVID-19 Diagnosis
with Routine Blood Tests Using Tree-Based Entropy Structured Self-Organizing Maps. Appl. Sci. 2022, 12, 5137. [CrossRef]
56. Zhang, X.; Han, L.; Sobeih, T.; Han, L.; Dempsey, N.; Lechareas, S.; Tridente, A.; Chen, H.; White, S.; Zhang, D. CXR-Net: A
Multitask Deep Learning Network for Explainable and Accurate Diagnosis of COVID-19 Pneumonia from Chest X-ray Images.
IEEE J. Biomed. Health Inform. 2023, 27, 980–991. [CrossRef]
57. Palatnik de Sousa, I.; Vellasco, M.M.B.R.; Costa da Silva, E. Explainable Artificial Intelligence for Bias Detection in COVID
CT-Scan Classifiers. Sensors 2021, 21, 5657. [CrossRef] [PubMed]
58. Nguyen, D.Q.; Vo, N.Q.; Nguyen, T.T.; Nguyen-An, K.; Nguyen, Q.H.; Tran, D.N.; Quan, T.T. BeCaked: An Explainable Artificial
Intelligence Model for COVID-19 Forecasting. Sci. Rep. 2022, 12, 7969. [CrossRef] [PubMed]
59. Guarrasi, V.; Soda, P. Multi-objective optimization determines when, which and how to fuse deep networks: An application to
predict COVID-19 outcomes. Comput. Biol. Med. 2023, 154, 106625. [CrossRef]
60. Alabdulhafith, M.; Saleh, H.; Elmannai, H.; Ali, Z.H.; El-Sappagh, S.; Hu, J.W.; El-Rashidy, N. A Clinical Decision Support System
for Edge/Cloud ICU Readmission Model Based on Particle Swarm Optimization, Ensemble Machine Learning, and Explainable
Artificial Intelligence. IEEE Access 2023, 11, 100604–100621. [CrossRef]
61. Henzel, J.; Tobiasz, J.; Kozielski, M.; Bach, M.; Foszner, P.; Gruca, A.; Kania, M.; Mika, J.; Papiez, A.; Werner, A.; et al. Screening
Support System Based on Patient Survey Data-Case Study on Classification of Initial, Locally Collected COVID-19 Data. Appl.
Sci. 2021, 11, 790. [CrossRef]
62. Delgado-Gallegos, J.L.; Aviles-Rodriguez, G.; Padilla-Rivas, G.R.; Cosio-Leon, M.d.l.A.; Franco-Villareal, H.; Nieto-Hipolito,
J.I.; Lopez, J.d.D.S.; Zuniga-Violante, E.; Islas, J.F.; Romo-Cardenas, G.S. Application of C5.0 Algorithm for the Assessment of
Perceived Stress in Healthcare Professionals Attending COVID-19. Brain Sci. 2023, 13, 513. [CrossRef] [PubMed]
63. Yigit, T.; Sengoz, N.; Ozmen, O.; Hemanth, J.; Isik, A.H. Diagnosis of Paratuberculosis in Histopathological Images Based on
Explainable Artificial Intelligence and Deep Learning. Trait. Signal 2022, 39, 863–869. [CrossRef]
64. Papandrianos, I.N.; Feleki, A.; Moustakidis, S.; Papageorgiou, I.E.; Apostolopoulos, I.D.; Apostolopoulos, D.J. An Explainable
Classification Method of SPECT Myocardial Perfusion Images in Nuclear Cardiology Using Deep Learning and Grad-CAM. Appl.
Sci. 2022, 12, 7592. [CrossRef]
65. Zhang, Y.; Weng, Y.; Lund, J. Applications of Explainable Artificial Intelligence in Diagnosis and Surgery. Diagnostics 2022, 12, 237.
[CrossRef]
66. Rietberg, M.T.; Nguyen, V.B.; Geerdink, J.; Vijlbrief, O.; Seifert, C. Accurate and Reliable Classification of Unstructured Reports
on Their Diagnostic Goal Using BERT Models. Diagnostics 2023, 13, 1251. [CrossRef] [PubMed]
67. Ornek, A.H.; Ceylan, M. Explainable Artificial Intelligence (XAI): Classification of Medical Thermal Images of Neonates Using
Class Activation Maps. Trait. Signal 2021, 38, 1271–1279. [CrossRef]
68. Dindorf, C.; Konradi, J.; Wolf, C.; Taetz, B.; Bleser, G.; Huthwelker, J.; Werthmann, F.; Bartaguiz, E.; Kniepert, J.; Drees, P.; et al.
Classification and Automated Interpretation of Spinal Posture Data Using a Pathology-Independent Classifier and Explainable
Artificial Intelligence (XAI). Sensors 2021, 21, 6323. [CrossRef]
69. Sarp, S.; Kuzlu, M.; Wilson, E.; Cali, U.; Guler, O. The Enlightening Role of Explainable Artificial Intelligence in Chronic Wound
Classification. Electronics 2021, 10, 1406. [CrossRef]
70. Wang, M.H.; Chong, K.K.l.; Lin, Z.; Yu, X.; Pan, Y. An Explainable Artificial Intelligence-Based Robustness Optimization Approach
for Age-Related Macular Degeneration Detection Based on Medical IOT Systems. Electronics 2023, 12, 2697. [CrossRef]
71. Kalyakulina, A.; Yusipov, I.; Kondakova, E.; Bacalini, M.G.; Franceschi, C.; Vedunova, M.; Ivanchenko, M. Small immunological
clocks identified by deep learning and gradient boosting. Front. Immunol. 2023, 14, 1177611. [CrossRef]
72. Javed, A.R.; Khan, H.U.; Alomari, M.K.B.; Sarwar, M.U.; Asim, M.; Almadhor, A.S.; Khan, M.Z. Toward explainable AI-
empowered cognitive health assessment. Front. Public Health 2023, 11, 1024195. [CrossRef]
73. Valladares-Rodriguez, S.; Fernandez-Iglesias, M.J.; Anido-Rifon, L.E.; Pacheco-Lorenzo, M. Evaluation of the Predictive Ability
and User Acceptance of Panoramix 2.0, an AI-Based E-Health Tool for the Detection of Cognitive Impairment. Electronics 2022,
11, 3424. [CrossRef]
74. Moreno-Sanchez, P.A. Improvement of a prediction model for heart failure survival through explainable artificial intelligence.
Front. Cardiovasc. Med. 2023, 10, 1219586. [CrossRef]
75. Katsushika, S.; Kodera, S.; Sawano, S.; Shinohara, H.; Setoguchi, N.; Tanabe, K.; Higashikuni, Y.; Takeda, N.; Fujiu, K.; Daimon,
M.; et al. An explainable artificial intelligence-enabled electrocardiogram analysis model for the classification of reduced left
ventricular function. Eur. Heart J.-Digit. Health 2023, 4, 254–264. [CrossRef] [PubMed]
76. Kamal, M.S.; Dey, N.; Chowdhury, L.; Hasan, S.I.; Santosh, K.C. Explainable AI for Glaucoma Prediction Analysis to Understand
Risk Factors in Treatment Planning. IEEE Trans. Instrum. Meas. 2022, 71, 2509209. [CrossRef]
77. Deperlioglu, O.; Kose, U.; Gupta, D.; Khanna, A.; Giampaolo, F.; Fortino, G. Explainable framework for Glaucoma diagnosis by
image processing and convolutional neural network synergy: Analysis with doctor evaluation. Future Gener. Comput. Syst. 2022, 129, 152–169. [CrossRef]
78. Kim, Y.K.; Koo, J.H.; Lee, S.J.; Song, H.S.; Lee, M. Explainable Artificial Intelligence Warning Model Using an Ensemble Approach
for In-Hospital Cardiac Arrest Prediction: Retrospective Cohort Study. J. Med. Internet Res. 2023, 25, e48244. [CrossRef]
79. Obayya, M.; Nemri, N.; Nour, M.K.; Al Duhayyim, M.; Mohsen, H.; Rizwanullah, M.; Zamani, A.S.; Motwakel, A. Explainable
Artificial Intelligence Enabled TeleOphthalmology for Diabetic Retinopathy Grading and Classification. Appl. Sci. 2022, 12, 8749.
[CrossRef]
80. Ganguly, R.; Singh, D. Explainable Artificial Intelligence (XAI) for the Prediction of Diabetes Management: An Ensemble
Approach. Int. J. Adv. Comput. Sci. Appl. 2023, 14, 158–163. [CrossRef]
81. Hendawi, R.; Li, J.; Roy, S. A Mobile App That Addresses Interpretability Challenges in Machine Learning-Based Diabetes
Predictions: Survey-Based User Study. JMIR Form. Res. 2023, 7, e50328. [CrossRef]
82. Maaroof, N.; Moreno, A.; Valls, A.; Jabreel, M.; Romero-Aroca, P. Multi-Class Fuzzy-LORE: A Method for Extracting Local and
Counterfactual Explanations Using Fuzzy Decision Trees. Electronics 2023, 12, 2215. [CrossRef]
83. Raza, A.; Tran, K.P.; Koehl, L.; Li, S. Designing ECG monitoring healthcare system with federated transfer learning and explainable
AI. Knowl.-Based Syst. 2022, 236, 107763. [CrossRef]
84. Singh, P.; Sharma, A. Interpretation and Classification of Arrhythmia Using Deep Convolutional Network. IEEE Trans. Instrum.
Meas. 2022, 71, 2518512. [CrossRef]
85. Mollaei, N.; Fujao, C.; Silva, L.; Rodrigues, J.; Cepeda, C.; Gamboa, H. Human-Centered Explainable Artificial Intelligence:
Automotive Occupational Health Protection Profiles in Prevention Musculoskeletal Symptoms. Int. J. Environ. Res. Public Health
2022, 19, 9552. [CrossRef] [PubMed]
86. Petrauskas, V.; Jasinevicius, R.; Damuleviciene, G.; Liutkevicius, A.; Janaviciute, A.; Lesauskaite, V.; Knasiene, J.; Meskauskas,
Z.; Dovydaitis, J.; Kazanavicius, V.; et al. Explainable Artificial Intelligence-Based Decision Support System for Assessing the
Nutrition-Related Geriatric Syndromes. Appl. Sci. 2021, 11, 1763. [CrossRef]
87. George, R.; Ellis, B.; West, A.; Graff, A.; Weaver, S.; Abramowski, M.; Brown, K.; Kerr, L.; Lu, S.C.; Swisher, C.; et al. Ensuring fair,
safe, and interpretable artificial intelligence-based prediction tools in a real-world oncological setting. Commun. Med. 2023, 3, 88.
[CrossRef] [PubMed]
88. Ivanovic, M.; Autexier, S.; Kokkonidis, M.; Rust, J. Quality medical data management within an open AI architecture-cancer
patients case. Connect. Sci. 2023, 35, 2194581. [CrossRef]
89. Zhang, H.; Ogasawara, K. Grad-CAM-Based Explainable Artificial Intelligence Related to Medical Text Processing. Bioengineering
2023, 10, 1070. [CrossRef]
90. Zlahtic, B.; Zavrsnik, J.; Vosner, H.B.; Kokol, P.; Suran, D.; Zavrsnik, T. Agile Machine Learning Model Development Using Data
Canyons in Medicine: A Step towards Explainable Artificial Intelligence and Flexible Expert-Based Model Improvement. Appl.
Sci. 2023, 13, 8329. [CrossRef]
91. Gouverneur, P.; Li, F.; Shirahama, K.; Luebke, L.; Adamczyk, W.M.; Szikszay, T.M.M.; Luedtke, K.; Grzegorzek, M. Explainable
Artificial Intelligence (XAI) in Pain Research: Understanding the Role of Electrodermal Activity for Automated Pain Recognition.
Sensors 2023, 23, 1959. [CrossRef]
92. Real, K.S.D.; Rubio, A. Discovering the mechanism of action of drugs with a sparse explainable network. Ebiomedicine 2023, 95,
104767. [CrossRef]
93. Park, A.; Lee, Y.; Nam, S. A performance evaluation of drug response prediction models for individual drugs. Sci. Rep. 2023,
13, 11911. [CrossRef] [PubMed]
94. Li, D.; Liu, Y.; Huang, J.; Wang, Z. A Trustworthy View on Explainable Artificial Intelligence Method Evaluation. Computer 2023,
56, 50–60. [CrossRef]
95. Chen, T.C.T.; Chiu, M.C. Evaluating the sustainability of smart technology applications in healthcare after the COVID-19
pandemic: A hybridising subjective and objective fuzzy group decision-making approach with explainable artificial intelligence.
Digit. Health 2022, 8, 20552076221136381. [CrossRef]
96. Bhatia, S.; Albarrak, A.S. A Blockchain-Driven Food Supply Chain Management Using QR Code and XAI-Faster RCNN
Architecture. Sustainability 2023, 15, 2579. [CrossRef]
97. Konradi, J.; Zajber, M.; Betz, U.; Drees, P.; Gerken, A.; Meine, H. AI-Based Detection of Aspiration for Video-Endoscopy with
Visual Aids in Meaningful Frames to Interpret the Model Outcome. Sensors 2022, 22, 9468. [CrossRef]
98. Aquino, G.; Costa, M.G.F.; Costa Filho, C.F.F. Explaining and Visualizing Embeddings of One-Dimensional Convolutional Models
in Human Activity Recognition Tasks. Sensors 2023, 23, 4409. [CrossRef]
99. Vijayvargiya, A.; Singh, P.; Kumar, R.; Dey, N. Hardware Implementation for Lower Limb Surface EMG Measurement and
Analysis Using Explainable AI for Activity Recognition. IEEE Trans. Instrum. Meas. 2022, 71, 2004909. [CrossRef]
100. Iliadou, E.; Su, Q.; Kikidis, D.; Bibas, T.; Kloukinas, C. Profiling hearing aid users through big data explainable artificial
intelligence techniques. Front. Neurol. 2022, 13, 933940. [CrossRef]
101. Wang, X.; Qiao, Y.; Cui, Y.; Ren, H.; Zhao, Y.; Linghu, L.; Ren, J.; Zhao, Z.; Chen, L.; Qiu, L. An explainable artificial intelligence
framework for risk prediction of COPD in smokers. BMC Public Health 2023, 23, 2164. [CrossRef] [PubMed]
102. Drobnic, F.; Starc, G.; Jurak, G.; Kos, A.; Pustisek, M. Explained Learning and Hyperparameter Optimization of Ensemble
Estimator on the Bio-Psycho-Social Features of Children and Adolescents. Electronics 2023, 12, 4097. [CrossRef]
103. Jeong, T.; Park, U.; Kang, S.W. Novel quantitative electroencephalogram feature image adapted for deep learning: Verification
through classification of Alzheimer’s disease dementia. Front. Neurosci. 2022, 16, 1033379. [CrossRef] [PubMed]
104. Varghese, A.; George, B.; Sherimon, V.; Al Shuaily, H.S. Enhancing Trust in Alzheimer’s Disease Classification using Explainable
Artificial Intelligence: Incorporating Local Post Hoc Explanations for a Glass-box Model. Bahrain Med. Bull. 2023, 45, 1471–1478.
105. Amoroso, N.; Quarto, S.; La Rocca, M.; Tangaro, S.; Monaco, A.; Bellotti, R. An eXplainability Artificial Intelligence approach to
brain connectivity in Alzheimer’s disease. Front. Aging Neurosci. 2023, 15, 1238065. [CrossRef] [PubMed]
106. Kamal, M.S.; Northcote, A.; Chowdhury, L.; Dey, N.; Gonzalez Crespo, R.; Herrera-Viedma, E. Alzheimer’s Patient Analysis
Using Image and Gene Expression Data and Explainable-AI to Present Associated Genes. IEEE Trans. Instrum. Meas. 2021,
70, 2513107. [CrossRef]
107. Hernandez, M.; Ramon-Julvez, U.; Ferraz, F.; Consortium, A. Explainable AI toward understanding the performance of the top
three TADPOLE Challenge methods in the forecast of Alzheimer’s disease diagnosis. PLoS ONE 2022, 17, e0264695. [CrossRef]
[PubMed]
108. El-Sappagh, S.; Alonso, J.M.; Islam, S.M.R.; Sultan, A.M.; Kwak, K.S. A multilayer multimodal detection and prediction model
based on explainable artificial intelligence for Alzheimer’s disease. Sci. Rep. 2021, 11, 2660. [CrossRef]
109. Mahim, S.M.; Ali, M.S.; Hasan, M.O.; Nafi, A.A.N.; Sadat, A.; Al Hasan, S.A.; Shareef, B.; Ahsan, M.M.; Islam, M.K.; Miah, M.S.;
et al. Unlocking the Potential of XAI for Improved Alzheimer’s Disease Detection and Classification Using a ViT-GRU Model.
IEEE Access 2024, 12, 8390–8412. [CrossRef]
110. Bhandari, N.; Walambe, R.; Kotecha, K.; Kaliya, M. Integrative gene expression analysis for the diagnosis of Parkinson’s disease
using machine learning and explainable AI. Comput. Biol. Med. 2023, 163, 107140. [CrossRef] [PubMed]
111. Kalyakulina, A.; Yusipov, I.; Bacalini, M.G.; Franceschi, C.; Vedunova, M.; Ivanchenko, M. Disease classification for whole-blood
DNA methylation: Meta-analysis, missing values imputation, and XAI. Gigascience 2022, 11, giac097. [CrossRef] [PubMed]
112. McFall, G.P.; Bohn, L.; Gee, M.; Drouin, S.M.; Fah, H.; Han, W.; Li, L.; Camicioli, R.; Dixon, R.A. Identifying key multi-modal
predictors of incipient dementia in Parkinson’s disease: A machine learning analysis and Tree SHAP interpretation. Front. Aging
Neurosci. 2023, 15, 1124232. [CrossRef]
113. Pianpanit, T.; Lolak, S.; Sawangjai, P.; Sudhawiyangkul, T.; Wilaiprasitporn, T. Parkinson’s Disease Recognition Using SPECT
Image and Interpretable AI: A Tutorial. IEEE Sens. J. 2021, 21, 22304–22316. [CrossRef]
114. Kumar, A.; Manikandan, R.; Kose, U.; Gupta, D.; Satapathy, S.C. Doctor’s Dilemma: Evaluating an Explainable Subtractive
Spatial Lightweight Convolutional Neural Network for Brain Tumor Diagnosis. Acm Trans. Multimed. Comput. Commun. Appl.
2021, 17, 105. [CrossRef]
115. Gaur, L.; Bhandari, M.; Razdan, T.; Mallik, S.; Zhao, Z. Explanation-Driven Deep Learning Model for Prediction of Brain Tumour
Status Using MRI Image Data. Front. Genet. 2022, 13, 822666. [CrossRef]
116. Tasci, B. Attention Deep Feature Extraction from Brain MRIs in Explainable Mode: DGXAINet. Diagnostics 2023, 13, 859.
[CrossRef] [PubMed]
117. Esmaeili, M.; Vettukattil, R.; Banitalebi, H.; Krogh, N.R.; Geitung, J.T. Explainable Artificial Intelligence for Human-Machine
Interaction in Brain Tumor Localization. J. Pers. Med. 2021, 11, 1213. [CrossRef] [PubMed]
118. Maqsood, S.; Damasevicius, R.; Maskeliunas, R. Multi-Modal Brain Tumor Detection Using Deep Neural Network and Multiclass
SVM. Medicina 2022, 58, 1090. [CrossRef]
119. Solorio-Ramirez, J.L.; Saldana-Perez, M.; Lytras, M.D.; Moreno-Ibarra, M.A.; Yanez-Marquez, C. Brain Hemorrhage Classification
in CT Scan Images Using Minimalist Machine Learning. Diagnostics 2021, 11, 1449. [CrossRef]
120. Andreu-Perez, J.; Emberson, L.L.; Kiani, M.; Filippetti, M.L.; Hagras, H.; Rigato, S. Explainable artificial intelligence based
analysis for interpreting infant fNIRS data in developmental cognitive neuroscience. Commun. Biol. 2021, 4, 1077. [CrossRef]
121. Hilal, A.M.; Issaoui, I.; Obayya, M.; Al-Wesabi, F.N.; Nemri, N.; Hamza, M.A.; Al Duhayyim, M.; Zamani, A.S. Modeling of
Explainable Artificial Intelligence for Biomedical Mental Disorder Diagnosis. CMC-Comput. Mater. Contin. 2022, 71, 3853–3867.
[CrossRef]
122. Vieira, J.C.; Guedes, L.A.; Santos, M.R.; Sanchez-Gendriz, I.; He, F.; Wei, H.L.; Guo, Y.; Zhao, Y. Using Explainable Artificial
Intelligence to Obtain Efficient Seizure-Detection Models Based on Electroencephalography Signals. Sensors 2023, 23, 9871.
[CrossRef]
123. Al-Hussaini, I.; Mitchell, C.S. SeizFt: Interpretable Machine Learning for Seizure Detection Using Wearables. Bioengineering 2023,
10, 918. [CrossRef]
124. Li, Z.; Li, R.; Zhou, Y.; Rasmy, L.; Zhi, D.; Zhu, P.; Dono, A.; Jiang, X.; Xu, H.; Esquenazi, Y.; et al. Prediction of Brain Metastases
Development in Patients with Lung Cancer by Explainable Artificial Intelligence from Electronic Health Records. JCO Clin.
Cancer Inform. 2023, 7, e2200141. [CrossRef] [PubMed]
125. Azam, H.; Tariq, H.; Shehzad, D.; Akbar, S.; Shah, H.; Khan, Z.A. Fully Automated Skull Stripping from Brain Magnetic
Resonance Images Using Mask RCNN-Based Deep Learning Neural Networks. Brain Sci. 2023, 13, 1255. [CrossRef] [PubMed]
126. Sasahara, K.; Shibata, M.; Sasabe, H.; Suzuki, T.; Takeuchi, K.; Umehara, K.; Kashiyama, E. Feature importance of machine
learning prediction models shows structurally active part and important physicochemical features in drug design. Drug Metab.
Pharmacokinet. 2021, 39, 100401. [CrossRef]
127. Wang, Q.; Huang, K.; Chandak, P.; Zitnik, M.; Gehlenborg, N. Extending the Nested Model for User-Centric XAI: A Design Study
on GNN-based Drug Repurposing. IEEE Trans. Vis. Comput. Graph. 2023, 29, 1266–1276. [CrossRef]
128. Castiglione, F.; Nardini, C.; Onofri, E.; Pedicini, M.; Tieri, P. Explainable Drug Repurposing Approach from Biased Random
Walks. IEEE/ACM Trans. Comput. Biol. Bioinform. 2023, 20, 1009–1019. [CrossRef]
129. Jena, R.; Pradhan, B.; Gite, S.; Alamri, A.; Park, H.J. A new method to promptly evaluate spatial earthquake probability mapping
using an explainable artificial intelligence (XAI) model. Gondwana Res. 2023, 123, 54–67. [CrossRef]
130. Jena, R.; Shanableh, A.; Al-Ruzouq, R.; Pradhan, B.; Gibril, M.B.A.; Khalil, M.A.; Ghorbanzadeh, O.; Ganapathy, G.P.; Ghamisi, P.
Explainable Artificial Intelligence (XAI) Model for Earthquake Spatial Probability Assessment in Arabian Peninsula. Remote. Sens.
2023, 15, 2248. [CrossRef]
131. Alshehri, F.; Rahman, A. Coupling Machine and Deep Learning with Explainable Artificial Intelligence for Improving Prediction
of Groundwater Quality and Decision-Making in Arid Region, Saudi Arabia. Water 2023, 15, 2298. [CrossRef]
132. Clare, M.C.A.; Sonnewald, M.; Lguensat, R.; Deshayes, J.; Balaji, V. Explainable Artificial Intelligence for Bayesian Neural
Networks: Toward Trustworthy Predictions of Ocean Dynamics. J. Adv. Model. Earth Syst. 2022, 14, e2022MS003162. [CrossRef]
133. Nunez, J.; Cortes, C.B.; Yanez, M.A. Explainable Artificial Intelligence in Hydrology: Interpreting Black-Box Snowmelt-Driven
Streamflow Predictions in an Arid Andean Basin of North-Central Chile. Water 2023, 15, 3369. [CrossRef]
134. Kolevatova, A.; Riegler, M.A.; Cherubini, F.; Hu, X.; Hammer, H.L. Unraveling the Impact of Land Cover Changes on Climate
Using Machine Learning and Explainable Artificial Intelligence. Big Data Cogn. Comput. 2021, 5, 55. [CrossRef]
135. Xue, P.; Wagh, A.; Ma, G.; Wang, Y.; Yang, Y.; Liu, T.; Huang, C. Integrating Deep Learning and Hydrodynamic Modeling to
Improve the Great Lakes Forecast. Remote. Sens. 2022, 14, 2640. [CrossRef]
136. Huang, F.; Zhang, Y.; Zhang, Y.; Nourani, V.; Li, Q.; Li, L.; Shangguan, W. Towards interpreting machine learning models for
predicting soil moisture droughts. Environ. Res. Lett. 2023, 18, 074002. [CrossRef]
137. Huynh, T.M.T.; Ni, C.F.; Su, Y.S.; Nguyen, V.C.N.; Lee, I.H.; Lin, C.P.; Nguyen, H.H. Predicting Heavy Metal Concentrations in
Shallow Aquifer Systems Based on Low-Cost Physiochemical Parameters Using Machine Learning Techniques. Int. J. Environ.
Res. Public Health 2022, 19, 12180. [CrossRef] [PubMed]
138. Bandstra, M.S.; Curtis, J.C.; Ghawaly, J.M., Jr.; Jones, A.C.; Joshi, T.H.Y. Explaining machine-learning models for gamma-ray
detection and identification. PLoS ONE 2023, 18, e0286829. [CrossRef]
139. Andresini, G.; Appice, A.; Malerba, D. SILVIA: An eXplainable Framework to Map Bark Beetle Infestation in Sentinel-2 Images.
IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens. 2023, 16, 10050–10066. [CrossRef]
140. van Stein, B.; Raponi, E.; Sadeghi, Z.; Bouman, N.; van Ham, R.; Back, T. A Comparison of Global Sensitivity Analysis Methods
for Explainable AI with an Application in Genomic Prediction. IEEE Access 2022, 10, 103364–103381. [CrossRef]
141. Quach, L.D.; Quoc, K.N.; Quynh, A.N.; Thai-Nghe, N.; Nguyen, T.G. Explainable Deep Learning Models with Gradient-Weighted
Class Activation Mapping for Smart Agriculture. IEEE Access 2023, 11, 83752–83762. [CrossRef]
142. Lysov, M.; Pukhkiy, K.; Vasiliev, E.; Getmanskaya, A.; Turlapov, V. Ensuring Explainability and Dimensionality Reduction in a
Multidimensional HSI World for Early XAI-Diagnostics of Plant Stress. Entropy 2023, 25, 801. [CrossRef]
143. Iatrou, M.; Karydas, C.; Tseni, X.; Mourelatos, S. Representation Learning with a Variational Autoencoder for Predicting Nitrogen
Requirement in Rice. Remote. Sens. 2022, 14, 5978. [CrossRef]
144. Zinonos, Z.; Gkelios, S.; Khalifeh, A.F.; Hadjimitsis, D.G.; Boutalis, Y.S.; Chatzichristofis, S.A. Grape Leaf Diseases Identification
System Using Convolutional Neural Networks and LoRa Technology. IEEE Access 2022, 10, 122–133. [CrossRef]
145. Danilevicz, M.F.; Gill, M.; Fernandez, C.G.T.; Petereit, J.; Upadhyaya, S.R.; Batley, J.; Bennamoun, M.; Edwards, D.; Bayer, P.E.
DNABERT-based explainable lncRNA identification in plant genome assemblies. Comput. Struct. Biotechnol. J. 2023, 21, 5676–5685.
[CrossRef]
146. Kim, M.; Kim, D.; Jin, D.; Kim, G. Application of Explainable Artificial Intelligence (XAI) in Urban Growth Modeling: A Case
Study of Seoul Metropolitan Area, Korea. Land 2023, 12, 420. [CrossRef]
147. Galli, A.; Piscitelli, M.S.; Moscato, V.; Capozzoli, A. Bridging the gap between complexity and interpretability of a data analytics-based process for benchmarking energy performance of buildings. Expert Syst. Appl. 2022, 206, 117649. [CrossRef]
148. Nguyen, D.D.; Tanveer, M.; Mai, H.N.; Pham, T.Q.D.; Khan, H.; Park, C.W.; Kim, G.M. Guiding the optimization of membraneless
microfluidic fuel cells via explainable artificial intelligence: Comparative analyses of multiple machine learning models and
investigation of key operating parameters. Fuel 2023, 349, 128742. [CrossRef]
149. Pandey, D.S.; Raza, H.; Bhattacharyya, S. Development of explainable AI-based predictive models for bubbling fluidised bed
gasification process. Fuel 2023, 351, 128971. [CrossRef]
150. Wongburi, P.; Park, J.K. Prediction of Sludge Volume Index in a Wastewater Treatment Plant Using Recurrent Neural Network.
Sustainability 2022, 14, 6276. [CrossRef]
151. Aslam, N.; Khan, I.U.; Alansari, A.; Alrammah, M.; Alghwairy, A.; Alqahtani, R.; Alqahtani, R.; Almushikes, M.; Hashim, M.A.L.
Anomaly Detection Using Explainable Random Forest for the Prediction of Undesirable Events in Oil Wells. Appl. Comput. Intell.
Soft Comput. 2022, 2022, 1558381. [CrossRef]
152. Mardian, J.; Champagne, C.; Bonsal, B.; Berg, A. Understanding the Drivers of Drought Onset and Intensification in the Canadian
Prairies: Insights from Explainable Artificial Intelligence (XAI). J. Hydrometeorol. 2023, 24, 2035–2055. [CrossRef]
153. Youness, G.; Aalah, A. An Explainable Artificial Intelligence Approach for Remaining Useful Life Prediction. Aerospace 2023,
10, 474. [CrossRef]
154. Chowdhury, D.; Sinha, A.; Das, D. XAI-3DP: Diagnosis and Understanding Faults of 3-D Printer with Explainable Ensemble AI.
IEEE Sens. Lett. 2023, 7, 6000104. [CrossRef]
155. Chelgani, S.C.; Nasiri, H.; Tohry, A.; Heidari, H.R. Modeling industrial hydrocyclone operational variables by SHAP-CatBoost-A
“conscious lab” approach. Powder Technol. 2023, 420, 118416. [CrossRef]
156. Elkhawaga, G.; Abu-Elkheir, M.; Reichert, M. Explainability of Predictive Process Monitoring Results: Can You See My Data
Issues? Appl. Sci. 2022, 12, 8192. [CrossRef]
157. El-khawaga, G.; Abu-Elkheir, M.; Reichert, M. XAI in the Context of Predictive Process Monitoring: An Empirical Analysis
Framework. Algorithms 2022, 15, 199. [CrossRef]
158. Hanchate, A.; Bukkapatnam, S.T.S.; Lee, K.H.; Srivastava, A.; Kumara, S. Reprint of: Explainable AI (XAI)-driven vibration
sensing scheme for surface quality monitoring in a smart surface grinding process. J. Manuf. Process. 2023, 100, 64–74. [CrossRef]
159. Alfeo, A.L.L.; Cimino, M.G.C.A.; Vaglini, G. Degradation stage classification via interpretable feature learning. J. Manuf. Syst.
2022, 62, 972–983. [CrossRef]
160. Akyol, S.; Das, M.; Alatas, B. Modeling the Energy Consumption of R600a Gas in a Refrigeration System with New Explainable
Artificial Intelligence Methods Based on Hybrid Optimization. Biomimetics 2023, 8, 397. [CrossRef] [PubMed]
161. Sharma, K.V.; Sai, P.H.V.S.T.; Sharma, P.; Kanti, P.K.; Bhramara, P.; Akilu, S. Prognostic modeling of polydisperse SiO2/Aqueous
glycerol nanofluids’ thermophysical profile using an explainable artificial intelligence (XAI) approach. Eng. Appl. Artif. Intell.
2023, 126, 106967. [CrossRef]
162. Kulasooriya, W.K.V.J.B.; Ranasinghe, R.S.S.; Perera, U.S.; Thisovithan, P.; Ekanayake, I.U.; Meddage, D.P.P. Modeling strength
characteristics of basalt fiber reinforced concrete using multiple explainable machine learning with a graphical user interface. Sci.
Rep. 2023, 13, 13138. [CrossRef]
163. Geetha, G.K.; Sim, S.H. Fast identification of concrete cracks using 1D deep learning and explainable artificial intelligence-based
analysis. Autom. Constr. 2022, 143, 104572. [CrossRef]
164. Noh, Y.R.; Khalid, S.; Kim, H.S.; Choi, S.K. Intelligent Fault Diagnosis of Robotic Strain Wave Gear Reducer Using Area-Metric-
Based Sampling. Mathematics 2023, 11, 4081. [CrossRef]
165. Gim, J.; Lin, C.Y.; Turng, L.S. In-mold condition-centered and explainable artificial intelligence-based (IMC-XAI) process
optimization for injection molding. J. Manuf. Syst. 2024, 72, 196–213. [CrossRef]
166. Rozanec, J.M.; Trajkova, E.; Lu, J.; Sarantinoudis, N.; Arampatzis, G.; Eirinakis, P.; Mourtos, I.; Onat, M.K.; Yilmaz, D.A.; Kosmerlj,
A.; et al. Cyber-Physical LPG Debutanizer Distillation Columns: Machine-Learning-Based Soft Sensors for Product Quality
Monitoring. Appl. Sci. 2021, 11, 1790. [CrossRef]
167. Bobek, S.; Kuk, M.; Szelazek, M.; Nalepa, G.J. Enhancing Cluster Analysis with Explainable AI and Multidimensional Cluster
Prototypes. IEEE Access 2022, 10, 101556–101574. [CrossRef]
168. Chen, T.C.T.; Lin, C.W.; Lin, Y.C. A fuzzy collaborative forecasting approach based on XAI applications for cycle time range
estimation. Appl. Soft Comput. 2024, 151, 111122. [CrossRef]
169. Lee, Y.; Roh, Y. An Expandable Yield Prediction Framework Using Explainable Artificial Intelligence for Semiconductor
Manufacturing. Appl. Sci. 2023, 13, 2660. [CrossRef]
170. Alqaralleh, B.A.Y.; Aldhaban, F.; AlQarallehs, E.A.; Al-Omari, A.H. Optimal Machine Learning Enabled Intrusion Detection in
Cyber-Physical System Environment. CMC-Comput. Mater. Contin. 2022, 72, 4691–4707. [CrossRef]
171. Younisse, R.; Ahmad, A.; Abu Al-Haija, Q. Explaining Intrusion Detection-Based Convolutional Neural Networks Using Shapley
Additive Explanations (SHAP). Big Data Cogn. Comput. 2022, 6, 126. [CrossRef]
172. Larriva-Novo, X.; Sanchez-Zas, C.; Villagra, V.A.; Marin-Lopez, A.; Berrocal, J. Leveraging Explainable Artificial Intelligence in
Real-Time Cyberattack Identification: Intrusion Detection System Approach. Appl. Sci. 2023, 13, 8587. [CrossRef]
173. Mahbooba, B.; Timilsina, M.; Sahal, R.; Serrano, M. Explainable Artificial Intelligence (XAI) to Enhance Trust Management in
Intrusion Detection Systems Using Decision Tree Model. Complexity 2021, 2021, 6634811. [CrossRef]
174. Ferretti, C.; Saletta, M. Do Neural Transformers Learn Human-Defined Concepts? An Extensive Study in Source Code Processing
Domain. Algorithms 2022, 15, 449. [CrossRef]
175. Rjoub, G.; Bentahar, J.; Wahab, O.A.; Mizouni, R.; Song, A.; Cohen, R.; Otrok, H.; Mourad, A. A Survey on Explainable Artificial
Intelligence for Cybersecurity. IEEE Trans. Netw. Serv. Manag. 2023, 20, 5115–5140. [CrossRef]
176. Kuppa, A.; Le-Khac, N.A. Adversarial XAI Methods in Cybersecurity. IEEE Trans. Inf. Forensics Secur. 2021, 16, 4924–4938.
[CrossRef]
177. Jo, J.; Cho, J.; Moon, J. A Malware Detection and Extraction Method for the Related Information Using the ViT Attention
Mechanism on Android Operating System. Appl. Sci. 2023, 13, 6839. [CrossRef]
178. Lin, Y.S.; Liu, Z.Y.; Chen, Y.A.; Wang, Y.S.; Chang, Y.L.; Hsu, W.H. xCos: An Explainable Cosine Metric for Face Verification Task.
ACM Trans. Multimed. Comput. Commun. Appl. 2021, 17, 112. [CrossRef]
179. Lim, S.Y.; Chae, D.K.; Lee, S.C. Detecting Deepfake Voice Using Explainable Deep Learning Techniques. Appl. Sci. 2022, 12, 3926.
[CrossRef]
180. Zhang, Z.; Umar, S.; Al Hammadi, A.Y.; Yoon, S.; Damiani, E.; Ardagna, C.A.; Bena, N.; Yeun, C.Y. Explainable Data Poison
Attacks on Human Emotion Evaluation Systems Based on EEG Signals. IEEE Access 2023, 11, 18134–18147. [CrossRef]
181. Muna, R.K.; Hossain, M.I.; Alam, M.G.R.; Hassan, M.M.; Ianni, M.; Fortino, G. Demystifying machine learning models of massive
IoT attack detection with Explainable AI for sustainable and secure future smart cities. Internet Things 2023, 24, 100919. [CrossRef]
182. Luo, R.; Xing, J.; Chen, L.; Pan, Z.; Cai, X.; Li, Z.; Wang, J.; Ford, A. Glassboxing Deep Learning to Enhance Aircraft Detection
from SAR Imagery. Remote. Sens. 2021, 13, 3650. [CrossRef]
183. Perez-Landa, G.I.; Loyola-Gonzalez, O.; Medina-Perez, M.A. An Explainable Artificial Intelligence Model for Detecting Xenophobic Tweets. Appl. Sci. 2021, 11, 10801. [CrossRef]
184. Neupane, S.; Ables, J.; Anderson, W.; Mittal, S.; Rahimi, S.; Banicescu, I.; Seale, M. Explainable Intrusion Detection Systems
(X-IDS): A Survey of Current Methods, Challenges, and Opportunities. IEEE Access 2022, 10, 112392–112415. [CrossRef]
185. Manoharan, H.; Yuvaraja, T.; Kuppusamy, R.; Radhakrishnan, A. Implementation of explainable artificial intelligence in
commercial communication systems using micro systems. Sci. Prog. 2023, 106, 00368504231191657. [CrossRef] [PubMed]
186. Berger, T. Explainable artificial intelligence and economic panel data: A study on volatility spillover along the supply chains.
Financ. Res. Lett. 2023, 54, 103757. [CrossRef]
187. Raval, J.; Bhattacharya, P.; Jadav, N.K.; Tanwar, S.; Sharma, G.; Bokoro, P.N.; Elmorsy, M.; Tolba, A.; Raboaca, M.S. RaKShA: A
Trusted Explainable LSTM Model to Classify Fraud Patterns on Credit Card Transactions. Mathematics 2023, 11, 1901. [CrossRef]
188. Martinez, M.A.M.; Nadj, M.; Langner, M.; Toreini, P.; Maedche, A. Does this Explanation Help? Designing Local Model-agnostic
Explanation Representations and an Experimental Evaluation Using Eye-tracking Technology. ACM Trans. Interact. Intell. Syst.
2023, 13, 27. [CrossRef]
189. Martins, T.; de Almeida, A.M.; Cardoso, E.; Nunes, L. Explainable Artificial Intelligence (XAI): A Systematic Literature Review on
Taxonomies and Applications in Finance. IEEE Access 2024, 12, 618–629. [CrossRef]
190. Moscato, V.; Picariello, A.; Sperli, G. A benchmark of machine learning approaches for credit score prediction. Expert Syst. Appl.
2021, 165, 113986. [CrossRef]
191. Gramespacher, T.; Posth, J.A. Employing Explainable AI to Optimize the Return Target Function of a Loan Portfolio. Front. Artif.
Intell. 2021, 4, 693022. [CrossRef]
192. Gramegna, A.; Giudici, P. SHAP and LIME: An Evaluation of Discriminative Power in Credit Risk. Front. Artif. Intell. 2021, 4,
752558. [CrossRef]
193. Rudin, C.; Shaposhnik, Y. Globally-Consistent Rule-Based Summary-Explanations for Machine Learning Models: Application to
Credit-Risk Evaluation. J. Mach. Learn. Res. 2023, 24, 1–44. [CrossRef]
194. Torky, M.; Gad, I.; Hassanien, A.E. Explainable AI Model for Recognizing Financial Crisis Roots Based on Pigeon Optimization
and Gradient Boosting Model. Int. J. Comput. Intell. Syst. 2023, 16, 50. [CrossRef]
195. Bermudez, L.; Anaya, D.; Belles-Sampera, J. Explainable AI for paid-up risk management in life insurance products. Financ. Res.
Lett. 2023, 57, 104242. [CrossRef]
196. Rozanec, J.; Trajkova, E.; Kenda, K.; Fortuna, B.; Mladenic, D. Explaining Bad Forecasts in Global Time Series Models. Appl. Sci.
2021, 11, 9243. [CrossRef]
197. Kim, H.S.; Joe, I. An XAI method for convolutional neural networks in self-driving cars. PLoS ONE 2022, 17, e0267282. [CrossRef]
198. Veitch, E.; Alsos, O.A. Human-Centered Explainable Artificial Intelligence for Marine Autonomous Surface Vehicles. J. Mar. Sci.
Eng. 2021, 9, 227. [CrossRef]
199. Dworak, D.; Baranowski, J. Adaptation of Grad-CAM Method to Neural Network Architecture for LiDAR Pointcloud Object
Detection. Energies 2022, 15, 4681. [CrossRef]
200. Renda, A.; Ducange, P.; Marcelloni, F.; Sabella, D.; Filippou, M.C.; Nardini, G.; Stea, G.; Virdis, A.; Micheli, D.; Rapone, D.; et al.
Federated Learning of Explainable AI Models in 6G Systems: Towards Secure and Automated Vehicle Networking. Information
2022, 13, 395. [CrossRef]
201. Lorente, M.P.S.; Lopez, E.M.; Florez, L.A.; Espino, A.L.; Martinez, J.A.I.; de Miguel, A.S. Explaining Deep Learning-Based Driver
Models. Appl. Sci. 2021, 11, 3321. [CrossRef]
202. Qaffas, A.A.; Ben HajKacem, M.A.; Ben Ncir, C.E.; Nasraoui, O. An Explainable Artificial Intelligence Approach for Multi-Criteria
ABC Item Classification. J. Theor. Appl. Electron. Commer. Res. 2023, 18, 848–866. [CrossRef]
203. Yilmazer, R.; Birant, D. Shelf Auditing Based on Image Classification Using Semi-Supervised Deep Learning to Increase On-Shelf
Availability in Grocery Stores. Sensors 2021, 21, 327. [CrossRef] [PubMed]
204. Lee, J.; Jung, O.; Lee, Y.; Kim, O.; Park, C. A Comparison and Interpretation of Machine Learning Algorithm for the Prediction of
Online Purchase Conversion. J. Theor. Appl. Electron. Commer. Res. 2021, 16, 1472–1491. [CrossRef]
205. Okazaki, K.; Inoue, K. Explainable Model Fusion for Customer Journey Mapping. Front. Artif. Intell. 2022, 5, 824197. [CrossRef]
206. Diaz, G.M.; Galan, J.J.; Carrasco, R.A. XAI for Churn Prediction in B2B Models: A Use Case in an Enterprise Software Company.
Mathematics 2022, 10, 3896. [CrossRef]
207. Matuszelanski, K.; Kopczewska, K. Customer Churn in Retail E-Commerce Business: Spatial and Machine Learning Approach.
J. Theor. Appl. Electron. Commer. Res. 2022, 17, 165–198. [CrossRef]
208. Pereira, F.D.; Fonseca, S.C.; Oliveira, E.H.T.; Cristea, I.A.; Bellhauser, H.; Rodrigues, L.; Oliveira, D.B.F.; Isotani, S.; Carvalho,
L.S.G. Explaining Individual and Collective Programming Students’ Behavior by Interpreting a Black-Box Predictive Model.
IEEE Access 2021, 9, 117097–117119. [CrossRef]
209. Alcauter, I.; Martinez-Villasenor, L.; Ponce, H. Explaining Factors of Student Attrition at Higher Education. Comput. Sist. 2023,
27, 929–940. [CrossRef]
210. Gomez-Cravioto, D.A.; Diaz-Ramos, R.E.; Hernandez-Gress, N.; Luis Preciado, J.; Ceballos, H.G. Supervised machine learning
predictive analytics for alumni income. J. Big Data 2022, 9, 11. [CrossRef]
211. Saarela, M.; Heilala, V.; Jaaskela, P.; Rantakaulio, A.; Karkkainen, T. Explainable Student Agency Analytics. IEEE Access 2021,
9, 137444–137459. [CrossRef]
212. Ramon, Y.; Farrokhnia, R.A.; Matz, S.C.; Martens, D. Explainable AI for Psychological Profiling from Behavioral Data: An
Application to Big Five Personality Predictions from Financial Transaction Records. Information 2021, 12, 518. [CrossRef]
213. Zytek, A.; Liu, D.; Vaithianathan, R.; Veeramachaneni, K. Sibyl: Understanding and Addressing the Usability Challenges of
Machine Learning In High-Stakes Decision Making. IEEE Trans. Vis. Comput. Graph. 2022, 28, 1161–1171. [CrossRef] [PubMed]
214. Rodriguez Oconitrillo, L.R.; Jose Vargas, J.; Camacho, A.; Burgos, A.; Manuel Corchado, J. RYEL: An Experimental Study in the
Behavioral Response of Judges Using a Novel Technique for Acquiring Higher-Order Thinking Based on Explainable Artificial
Intelligence and Case-Based Reasoning. Electronics 2021, 10, 1500. [CrossRef]
215. Escobar-Linero, E.; Garcia-Jimenez, M.; Trigo-Sanchez, M.E.; Cala-Carrillo, M.J.; Sevillano, J.L.; Dominguez-Morales, M. Using
machine learning-based systems to help predict disengagement from the legal proceedings by women victims of intimate partner
violence in Spain. PLoS ONE 2023, 18, e0276032. [CrossRef]
216. Sokhansanj, B.A.; Rosen, G.L. Predicting Institution Outcomes for Inter Partes Review (IPR) Proceedings at the United States
Patent Trial & Appeal Board by Deep Learning of Patent Owner Preliminary Response Briefs. Appl. Sci. 2022, 12, 3656. [CrossRef]
217. Cha, Y.; Lee, Y. Advanced sentence-embedding method considering token importance based on explainable artificial intelligence
and text summarization model. Neurocomputing 2024, 564, 126987. [CrossRef]
218. Sevastjanova, R.; Jentner, W.; Sperrle, F.; Kehlbeck, R.; Bernard, J.; El-assady, M. QuestionComb: A Gamification Approach for
the Visual Explanation of Linguistic Phenomena through Interactive Labeling. ACM Trans. Interact. Intell. Syst. 2021, 11, 19.
[CrossRef]
219. Sovrano, F.; Vitali, F. Generating User-Centred Explanations via Illocutionary Question Answering: From Philosophy to Interfaces.
ACM Trans. Interact. Intell. Syst. 2022, 12, 26. [CrossRef]
220. Kumar, A.; Dikshit, S.; Albuquerque, V.H.C. Explainable Artificial Intelligence for Sarcasm Detection in Dialogues. Wirel.
Commun. Mob. Comput. 2021, 2021, 2939334. [CrossRef]
221. de Velasco, M.; Justo, R.; Zorrilla, A.L.; Torres, M.I. Analysis of Deep Learning-Based Decision-Making in an Emotional
Spontaneous Speech Task. Appl. Sci. 2023, 13, 980. [CrossRef]
222. Huang, J.; Wu, X.; Wen, J.; Huang, C.; Luo, M.; Liu, L.; Zheng, Y. Evaluating Familiarity Ratings of Domain Concepts with
Interpretable Machine Learning: A Comparative Study. Appl. Sci. 2023, 13, 2818. [CrossRef]
223. Shah, A.; Ranka, P.; Dedhia, U.; Prasad, S.; Muni, S.; Bhowmick, K. Detecting and Unmasking AI-Generated Texts through
Explainable Artificial Intelligence using Stylistic Features. Int. J. Adv. Comput. Sci. Appl. 2023, 14, 1043–1053. [CrossRef]
224. Samih, A.; Ghadi, A.; Fennan, A. ExMrec2vec: Explainable Movie Recommender System based on Word2vec. Int. J. Adv. Comput.
Sci. Appl. 2021, 12, 653–660. [CrossRef]
225. Pisoni, G.; Diaz-Rodriguez, N.; Gijlers, H.; Tonolli, L. Human-Centered Artificial Intelligence for Designing Accessible Cultural
Heritage. Appl. Sci. 2021, 11, 870. [CrossRef]
226. Mishra, S.; Shukla, A.K.; Muhuri, P.K. Explainable Fuzzy AI Challenge 2022: Winner’s Approach to a Computationally Efficient
and Explainable Solution. Axioms 2022, 11, 489. [CrossRef]
227. Sullivan, R.S.; Longo, L. Explaining Deep Q-Learning Experience Replay with SHapley Additive exPlanations. Mach. Learn.
Knowl. Extr. 2023, 5, 1433–1455. [CrossRef]
228. Tao, J.; Xiong, Y.; Zhao, S.; Wu, R.; Shen, X.; Lyu, T.; Fan, C.; Hu, Z.; Zhao, S.; Pan, G. Explainable AI for Cheating Detection and
Churn Prediction in Online Games. IEEE Trans. Games 2023, 15, 242–251. [CrossRef]
229. Szczepanski, M.; Pawlicki, M.; Kozik, R.; Choras, M. New explainability method for BERT-based model in fake news detection.
Sci. Rep. 2021, 11, 23705. [CrossRef]
230. Liang, X.S.; Straub, J. Deceptive Online Content Detection Using Only Message Characteristics and a Machine Learning Trained
Expert System. Sensors 2021, 21, 7083. [CrossRef]
231. Gowrisankar, B.; Thing, V.L.L. An adversarial attack approach for eXplainable AI evaluation on deepfake detection models.
Comput. Secur. 2024, 139, 103684. [CrossRef]
232. Damian, S.; Calvo, H.; Gelbukh, A. Fake News detection using n-grams for PAN@CLEF competition. J. Intell. Fuzzy Syst. 2022,
42, 4633–4640. [CrossRef]
233. De Magistris, G.; Russo, S.; Roma, P.; Starczewski, J.T.; Napoli, C. An Explainable Fake News Detector Based on Named Entity
Recognition and Stance Classification Applied to COVID-19. Information 2022, 13, 137. [CrossRef]
234. Joshi, G.; Srivastava, A.; Yagnik, B.; Hasan, M.; Saiyed, Z.; Gabralla, L.A.; Abraham, A.; Walambe, R.; Kotecha, K. Explainable
Misinformation Detection across Multiple Social Media Platforms. IEEE Access 2023, 11, 23634–23646. [CrossRef]
235. Heimerl, A.; Weitz, K.; Baur, T.; Andre, E. Unraveling ML Models of Emotion with NOVA: Multi-Level Explainable AI for
Non-Experts. IEEE Trans. Affect. Comput. 2022, 13, 1155–1167. [CrossRef]
236. Beker, T.; Ansari, H.; Montazeri, S.; Song, Q.; Zhu, X.X. Deep Learning for Subtle Volcanic Deformation Detection with InSAR
Data in Central Volcanic Zone. IEEE Trans. Geosci. Remote. Sens. 2023, 61, 5218520. [CrossRef]
237. Khan, M.A.; Park, H.; Lombardi, M. Exploring Explainable Artificial Intelligence Techniques for Interpretable Neural Networks
in Traffic Sign Recognition Systems. Electronics 2024, 13, 306. [CrossRef]
238. Resendiz, J.L.D.; Ponomaryov, V.; Reyes, R.R.; Sadovnychiy, S. Explainable CAD System for Classification of Acute Lymphoblastic
Leukemia Based on a Robust White Blood Cell Segmentation. Cancers 2023, 15, 3376. [CrossRef]
239. Lundberg, S.M.; Erion, G.; Chen, H.; DeGrave, A.; Prutkin, J.M.; Nair, B.; Katz, R.; Himmelfarb, J.; Bansal, N.; Lee, S.I. From local
explanations to global understanding with explainable AI for trees. Nat. Mach. Intell. 2020, 2, 56–67. [CrossRef]
240. Lundberg, S.M.; Lee, S.I. A unified approach to interpreting model predictions. In Proceedings of the 31st International
Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 4768–4777.
241. Bello, M.; Napoles, G.; Concepcion, L.; Bello, R.; Mesejo, P.; Cordon, O. REPROT: Explaining the predictions of complex deep
learning architectures for object detection through reducts of an image. Inf. Sci. 2024, 654, 119851. [CrossRef]
242. Fouladgar, N.; Alirezaie, M.; Framling, K. Metrics and Evaluations of Time Series Explanations: An Application in Affect
Computing. IEEE Access 2022, 10, 23995–24009. [CrossRef]
243. Arrotta, L.; Civitarese, G.; Bettini, C. DeXAR: Deep Explainable Sensor-Based Activity Recognition in Smart-Home Environments.
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2022, 6, 1. [CrossRef]
244. Astolfi, D.; De Caro, F.; Vaccaro, A. Condition Monitoring of Wind Turbine Systems by Explainable Artificial Intelligence
Techniques. Sensors 2023, 23, 5376. [CrossRef] [PubMed]
245. Jean-Quartier, C.; Bein, K.; Hejny, L.; Hofer, E.; Holzinger, A.; Jeanquartier, F. The Cost of Understanding-XAI Algorithms towards
Sustainable ML in the View of Computational Cost. Computation 2023, 11, 92. [CrossRef]
246. Stassin, S.; Corduant, V.; Mahmoudi, S.A.; Siebert, X. Explainability and Evaluation of Vision Transformers: An In-Depth
Experimental Study. Electronics 2024, 13, 175. [CrossRef]
247. Quach, L.D.; Quoc, K.N.; Quynh, A.N.; Ngoc, H.T.; Thai-Nghe, N. Tomato Health Monitoring System: Tomato Classification,
Detection, and Counting System Based on YOLOv8 Model with Explainable MobileNet Models Using Grad-CAM plus. IEEE
Access 2024, 12, 9719–9737. [CrossRef]
248. Varam, D.; Mitra, R.; Mkadmi, M.; Riyas, R.A.; Abuhani, D.A.; Dhou, S.; Alzaatreh, A. Wireless Capsule Endoscopy Image
Classification: An Explainable AI Approach. IEEE Access 2023, 11, 105262–105280. [CrossRef]
249. Bhambra, P.; Joachimi, B.; Lahav, O. Explaining deep learning of galaxy morphology with saliency mapping. Mon. Not. R. Astron.
Soc. 2022, 511, 5032–5041. [CrossRef]
250. Huang, F.; Zhang, Y.; Zhang, Y.; Wei, S.; Li, Q.; Li, L.; Jiang, S. Interpreting Conv-LSTM for Spatio-Temporal Soil Moisture
Prediction in China. Agriculture 2023, 13, 971. [CrossRef]
251. Wei, K.; Chen, B.; Zhang, J.; Fan, S.; Wu, K.; Liu, G.; Chen, D. Explainable Deep Learning Study for Leaf Disease Classification.
Agronomy 2022, 12, 1035. [CrossRef]
252. Jin, W.; Li, X.; Fatehi, M.; Hamarneh, G. Generating post-hoc explanation from deep neural networks for multi-modal medical
image analysis tasks. Methodsx 2023, 10, 102009. [CrossRef]
253. Song, Z.; Trozzi, F.; Tian, H.; Yin, C.; Tao, P. Mechanistic Insights into Enzyme Catalysis from Explaining Machine-Learned
Quantum Mechanical and Molecular Mechanical Minimum Energy Pathways. ACS Phys. Chem. Au 2022, 2, 316–330. [CrossRef]
254. Brdar, S.; Panic, M.; Matavulj, P.; Stankovic, M.; Bartolic, D.; Sikoparija, B. Explainable AI for unveiling deep learning pollen
classification model based on fusion of scattered light patterns and fluorescence spectroscopy. Sci. Rep. 2023, 13, 3205. [CrossRef]
[PubMed]
255. Ullah, I.; Rios, A.; Gala, V.; Mckeever, S. Explaining Deep Learning Models for Tabular Data Using Layer-Wise Relevance
Propagation. Appl. Sci. 2022, 12, 136. [CrossRef]
256. Dong, S.; Jin, Y.; Bak, S.; Yoon, B.; Jeong, J. Explainable Convolutional Neural Network to Investigate Age-Related Changes in
Multi-Order Functional Connectivity. Electronics 2021, 10, 3020. [CrossRef]
257. Althoff, D.; Bazame, H.C.; Nascimento, J.G. Untangling hybrid hydrological models with explainable artificial intelligence.
H2Open J. 2021, 4, 13–28. [CrossRef]
258. Tiensuu, H.; Tamminen, S.; Puukko, E.; Roening, J. Evidence-Based and Explainable Smart Decision Support for Quality
Improvement in Stainless Steel Manufacturing. Appl. Sci. 2021, 11, 10897. [CrossRef]
259. Messner, W. From black box to clear box: A hypothesis testing framework for scalar regression problems using deep artificial
neural networks. Appl. Soft Comput. 2023, 146, 110729. [CrossRef]
260. Allen, B. An interpretable machine learning model of cross-sectional US county-level obesity prevalence using explainable
artificial intelligence. PLoS ONE 2023, 18, e0292341. [CrossRef]
261. Ilman, M.M.; Yavuz, S.; Taser, P.Y. Generalized Input Preshaping Vibration Control Approach for Multi-Link Flexible Manipulators
using Machine Intelligence. Mechatronics 2022, 82, 102735. [CrossRef]
262. Aghaeipoor, F.; Javidi, M.M.; Fernandez, A. IFC-BD: An Interpretable Fuzzy Classifier for Boosting Explainable Artificial
Intelligence in Big Data. IEEE Trans. Fuzzy Syst. 2022, 30, 830–840. [CrossRef]
263. Zaman, M.; Hassan, A. Fuzzy Heuristics and Decision Tree for Classification of Statistical Feature-Based Control Chart Patterns.
Symmetry 2021, 13, 110. [CrossRef]
264. Fernandez, G.; Aledo, J.A.; Gamez, J.A.; Puerta, J.M. Factual and Counterfactual Explanations in Fuzzy Classification Trees. IEEE
Trans. Fuzzy Syst. 2022, 30, 5484–5495. [CrossRef]
265. Gkalelis, N.; Daskalakis, D.; Mezaris, V. ViGAT: Bottom-Up Event Recognition and Explanation in Video Using Factorized Graph
Attention Network. IEEE Access 2022, 10, 108797–108816. [CrossRef]
266. Singha, M.; Pu, L.; Srivastava, G.; Ni, X.; Stanfield, B.A.; Uche, I.K.; Rider, P.J.F.; Kousoulas, K.G.; Ramanujam, J.; Brylinski, M.
Unlocking the Potential of Kinase Targets in Cancer: Insights from CancerOmicsNet, an AI-Driven Approach to Drug Response
Prediction in Cancer. Cancers 2023, 15, 4050. [CrossRef] [PubMed]
267. Shang, Y.; Tian, Y.; Zhou, M.; Zhou, T.; Lyu, K.; Wang, Z.; Xin, R.; Liang, T.; Zhu, S.; Li, J. EHR-Oriented Knowledge Graph
System: Toward Efficient Utilization of Non-Used Information Buried in Routine Clinical Practice. IEEE J. Biomed. Health Inform.
2021, 25, 2463–2475. [CrossRef]
268. Espinoza, J.L.; Dupont, C.L.; O’Rourke, A.; Beyhan, S.; Morales, P.; Spoering, A.; Meyer, K.J.; Chan, A.P.; Choi, Y.; Nierman,
W.C.; et al. Predicting antimicrobial mechanism-of-action from transcriptomes: A generalizable explainable artificial intelligence
approach. PLoS Comput. Biol. 2021, 17, e1008857. [CrossRef]
269. Altini, N.; Puro, E.; Taccogna, M.G.; Marino, F.; De Summa, S.; Saponaro, C.; Mattioli, E.; Zito, F.A.; Bevilacqua, V. Tumor
Cellularity Assessment of Breast Histopathological Slides via Instance Segmentation and Pathomic Features Explainability.
Bioengineering 2023, 10, 396. [CrossRef]
270. Huelsmann, J.; Barbosa, J.; Steinke, F. Local Interpretable Explanations of Energy System Designs. Energies 2023, 16, 2161.
[CrossRef]
271. Misitano, G.; Afsar, B.; Larraga, G.; Miettinen, K. Towards explainable interactive multiobjective optimization: R-XIMO. Auton.
Agents-Multi-Agent Syst. 2022, 36, 43. [CrossRef]
272. Neghawi, E.; Liu, Y. Analysing Semi-Supervised ConvNet Model Performance with Computation Processes. Mach. Learn. Knowl.
Extr. 2023, 5, 1848–1876. [CrossRef]
273. Serradilla, O.; Zugasti, E.; Ramirez de Okariz, J.; Rodriguez, J.; Zurutuza, U. Adaptable and Explainable Predictive Maintenance:
Semi-Supervised Deep Learning for Anomaly Detection and Diagnosis in Press Machine Data. Appl. Sci. 2021, 11, 7376.
[CrossRef]
274. Lin, C.S.; Wang, Y.C.F. Describe, Spot and Explain: Interpretable Representation Learning for Discriminative Visual Reasoning.
IEEE Trans. Image Process. 2023, 32, 2481–2492. [CrossRef] [PubMed]
275. Mohamed, E.; Sirlantzis, K.; Howells, G.; Hoque, S. Optimisation of Deep Learning Small-Object Detectors with Novel Explainable
Verification. Sensors 2022, 22, 5596. [CrossRef]
276. Krenn, M.; Kottmann, J.S.; Tischler, N.; Aspuru-Guzik, A. Conceptual Understanding through Efficient Automated Design of
Quantum Optical Experiments. Phys. Rev. X 2021, 11, 031044. [CrossRef]
277. Podgorelec, V.; Kokol, P.; Stiglic, B.; Rozman, I. Decision trees: An overview and their use in medicine. J. Med. Syst. 2002,
26, 445–463. [CrossRef]
278. Thrun, M.C. Exploiting Distance-Based Structures in Data Using an Explainable AI for Stock Picking. Information 2022, 13, 51.
[CrossRef]
279. Carta, S.M.; Consoli, S.; Piras, L.; Podda, A.S.; Recupero, D.R. Explainable Machine Learning Exploiting News and Domain-
Specific Lexicon for Stock Market Forecasting. IEEE Access 2021, 9, 30193–30205. [CrossRef]
280. Almohimeed, A.; Saleh, H.; Mostafa, S.; Saad, R.M.A.; Talaat, A.S. Cervical Cancer Diagnosis Using Stacked Ensemble Model and
Optimized Feature Selection: An Explainable Artificial Intelligence Approach. Computers 2023, 12, 200. [CrossRef]
281. Chen, Z.; Lian, Z.; Xu, Z. Interpretable Model-Agnostic Explanations Based on Feature Relationships for High-Performance
Computing. Axioms 2023, 12, 997. [CrossRef]
282. Leite, D.; Skrjanc, I.; Blazic, S.; Zdesar, A.; Gomide, F. Interval incremental learning of interval data streams and application to
vehicle tracking. Inf. Sci. 2023, 630, 1–22. [CrossRef]
283. Antoniou, G.; Papadakis, E.; Baryannis, G. Mental Health Diagnosis: A Case for Explainable Artificial Intelligence. Int. J. Artif.
Intell. Tools 2022, 31, 2241003. [CrossRef]
284. Antoniadi, A.M.; Du, Y.; Guendouz, Y.; Wei, L.; Mazo, C.; Becker, B.A.; Mooney, C. Current challenges and future opportunities
for XAI in machine learning-based clinical decision support systems: A systematic review. Appl. Sci. 2021, 11, 5088. [CrossRef]
285. Qaffas, A.A.; Ben Hajkacem, M.A.; Ben Ncir, C.E.; Nasraoui, O. Interpretable Multi-Criteria ABC Analysis Based on Semi-
Supervised Clustering and Explainable Artificial Intelligence. IEEE Access 2023, 11, 43778–43792. [CrossRef]
286. Wickramasinghe, C.S.; Amarasinghe, K.; Marino, D.L.; Rieger, C.; Manic, M. Explainable Unsupervised Machine Learning for
Cyber-Physical Systems. IEEE Access 2021, 9, 131824–131843. [CrossRef]
287. Cui, Y.; Liu, T.; Che, W.; Chen, Z.; Wang, S. Teaching Machines to Read, Answer and Explain. IEEE/ACM Trans. Audio Speech
Lang. Process. 2022, 30, 1483–1492. [CrossRef]
288. Heuillet, A.; Couthouis, F.; Diaz-Rodriguez, N. Collective eXplainable AI: Explaining Cooperative Strategies and Agent
Contribution in Multiagent Reinforcement Learning with Shapley Values. IEEE Comput. Intell. Mag. 2022, 17, 59–71. [CrossRef]
289. Khanna, R.; Dodge, J.; Anderson, A.; Dikkala, R.; Irvine, J.; Shureih, Z.; Lam, K.H.; Matthews, C.R.; Lin, Z.; Kahng, M.; et al.
Finding Al’s Faults with AAR/AI An Empirical Study. ACM Trans. Interact. Intell. Syst. 2022, 12, 1. [CrossRef]
290. Klar, M.; Ruediger, P.; Schuermann, M.; Goeren, G.T.; Glatt, M.; Ravani, B.; Aurich, J.C. Explainable generative design in
manufacturing for reinforcement learning based factory layout planning. J. Manuf. Syst. 2024, 72, 74–92. [CrossRef]
291. Solis-Martin, D.; Galan-Paez, J.; Borrego-Diaz, J. On the Soundness of XAI in Prognostics and Health Management (PHM).
Information 2023, 14, 256. [CrossRef]
292. Mandler, H.; Weigand, B. Feature importance in neural networks as a means of interpretation for data-driven turbulence models.
Comput. Fluids 2023, 265, 105993. [CrossRef]
293. De Bosscher, B.C.D.; Ziabari, S.S.M.; Sharpanskykh, A. A comprehensive study of agent-based airport terminal operations using
surrogate modeling and simulation. Simul. Model. Pract. Theory 2023, 128, 102811. [CrossRef]
294. Wenninger, S.; Kaymakci, C.; Wiethe, C. Explainable long-term building energy consumption prediction using QLattice. Appl.
Energy 2022, 308, 118300. [CrossRef]
295. Schrills, T.; Franke, T. How Do Users Experience Traceability of AI Systems? Examining Subjective Information Processing
Awareness in Automated Insulin Delivery (AID) Systems. ACM Trans. Interact. Intell. Syst. 2023, 13, 25. [CrossRef]
296. Mehta, H.; Passi, K. Social Media Hate Speech Detection Using Explainable Artificial Intelligence (XAI). Algorithms 2022, 15, 291.
[CrossRef]
297. Ge, W.; Wang, J.; Lin, T.; Tang, B.; Li, X. Explainable cyber threat behavior identification based on self-adversarial topic generation.
Comput. Secur. 2023, 132, 103369. [CrossRef]
298. Posada-Moreno, A.F.; Surya, N.; Trimpe, S. ECLAD: Extracting Concepts with Local Aggregated Descriptors. Pattern Recognit.
2024, 147, 110146. [CrossRef]
299. Zolanvari, M.; Yang, Z.; Khan, K.; Jain, R.; Meskin, N. TRUST XAI: Model-Agnostic Explanations for AI with a Case Study on
IIoT Security. IEEE Internet Things J. 2023, 10, 2967–2978. [CrossRef]
300. Feng, J.; Wang, D.; Gu, Z. Bidirectional Flow Decision Tree for Reliable Remote Sensing Image Scene Classification. Remote. Sens.
2022, 14, 3943. [CrossRef]
301. Yin, S.; Li, H.; Sun, Y.; Ibrar, M.; Teng, L. Data Visualization Analysis Based on Explainable Artificial Intelligence: A Survey. IJLAI
Trans. Sci. Eng. 2024, 2, 13–20.
302. Meskauskas, Z.; Kazanavicius, E. About the New Methodology and XAI-Based Software Toolkit for Risk Assessment. Sustainability
2022, 14, 5496. [CrossRef]
303. Leem, S.; Oh, J.; So, D.; Moon, J. Towards Data-Driven Decision-Making in the Korean Film Industry: An XAI Model for Box
Office Analysis Using Dimension Reduction, Clustering, and Classification. Entropy 2023, 25, 571. [CrossRef]
304. Ayoub, O.; Troia, S.; Andreoletti, D.; Bianco, A.; Tornatore, M.; Giordano, S.; Rottondi, C. Towards explainable artificial intelligence
in optical networks: The use case of lightpath QoT estimation. J. Opt. Commun. Netw. 2023, 15, A26–A38. [CrossRef]
305. Aguilar, D.L.; Medina-Perez, M.A.; Loyola-Gonzalez, O.; Choo, K.K.R.; Bucheli-Susarrey, E. Towards an Interpretable Autoen-
coder: A Decision-Tree-Based Autoencoder and its Application in Anomaly Detection. IEEE Trans. Dependable Secur. Comput.
2023, 20, 1048–1059. [CrossRef]
306. del Castillo Torres, G.; Francesca Roig-Maimo, M.; Mascaro-Oliver, M.; Amengual-Alcover, E.; Mas-Sanso, R. Understanding
How CNNs Recognize Facial Expressions: A Case Study with LIME and CEM. Sensors 2023, 23, 131. [CrossRef] [PubMed]
307. Dewi, C.; Chen, R.C.; Yu, H.; Jiang, X. XAI for Image Captioning using SHAP. J. Inf. Sci. Eng. 2023, 39, 711–724. [CrossRef]
308. Alkhalaf, S.; Alturise, F.; Bahaddad, A.A.; Elnaim, B.M.E.; Shabana, S.; Abdel-Khalek, S.; Mansour, R.F. Adaptive Aquila Optimizer
with Explainable Artificial Intelligence-Enabled Cancer Diagnosis on Medical Imaging. Cancers 2023, 15, 1492. [CrossRef]
309. Nascita, A.; Montieri, A.; Aceto, G.; Ciuonzo, D.; Persico, V.; Pescape, A. XAI Meets Mobile Traffic Classification: Understanding
and Improving Multimodal Deep Learning Architectures. IEEE Trans. Netw. Serv. Manag. 2021, 18, 4225–4246. [CrossRef]
310. Silva-Aravena, F.; Delafuente, H.N.; Gutierrez-Bahamondes, J.H.; Morales, J. A Hybrid Algorithm of ML and XAI to Prevent
Breast Cancer: A Strategy to Support Decision Making. Cancers 2023, 15, 2443. [CrossRef] [PubMed]
311. Bjorklund, A.; Henelius, A.; Oikarinen, E.; Kallonen, K.; Puolamaki, K. Explaining any black box model using real data. Front.
Comput. Sci. 2023, 5, 1143904. [CrossRef]
312. Dobrovolskis, A.; Kazanavicius, E.; Kizauskiene, L. Building XAI-Based Agents for IoT Systems. Appl. Sci. 2023, 13, 4040.
[CrossRef]
313. Perl, M.; Sun, Z.; Machlev, R.; Belikov, J.; Levy, K.Y.; Levron, Y. PMU placement for fault line location using neural additive
models-A global XAI technique. Int. J. Electr. Power Energy Syst. 2024, 155, 109573. [CrossRef]
314. Nwafor, O.; Okafor, E.; Aboushady, A.A.; Nwafor, C.; Zhou, C. Explainable Artificial Intelligence for Prediction of Non-Technical
Losses in Electricity Distribution Networks. IEEE Access 2023, 11, 73104–73115. [CrossRef]
315. Panagoulias, D.P.; Sarmas, E.; Marinakis, V.; Virvou, M.; Tsihrintzis, G.A.; Doukas, H. Intelligent Decision Support for Energy
Management: A Methodology for Tailored Explainability of Artificial Intelligence Analytics. Electronics 2023, 12, 4430. [CrossRef]
316. Kim, S.; Choo, S.; Park, D.; Park, H.; Nam, C.S.; Jung, J.Y.; Lee, S. Designing an XAI interface for BCI experts: A contextual design
for pragmatic explanation interface based on domain knowledge in a specific context. Int. J. Hum.-Comput. Stud. 2023, 174,
103009. [CrossRef]
317. Wang, Z.; Joe, I. OISE: Optimized Input Sampling Explanation with a Saliency Map Based on the Black-Box Model. Appl. Sci.
2023, 13, 5886. [CrossRef]
318. Puechmorel, S. Pullback Bundles and the Geometry of Learning. Entropy 2023, 25, 1450. [CrossRef]
319. Machlev, R.; Perl, M.; Belikov, J.; Levy, K.Y.; Levron, Y. Measuring Explainability and Trustworthiness of Power Quality
Disturbances Classifiers Using XAI-Explainable Artificial Intelligence. IEEE Trans. Ind. Inform. 2022, 18, 5127–5137. [CrossRef]
320. Monteiro, W.R.; Reynoso-Meza, G. A multi-objective optimization design to generate surrogate machine learning models in
explainable artificial intelligence applications. Euro J. Decis. Process. 2023, 11, 100040. [CrossRef]
321. Shi, J.; Zou, W.; Zhang, C.; Tan, L.; Zou, Y.; Peng, Y.; Huo, W. CAMFuzz: Explainable Fuzzing with Local Interpretation.
Cybersecurity 2022, 5, 17. [CrossRef]
322. Igarashi, D.; Yee, J.; Yokoyama, Y.; Kusuno, H.; Tagawa, Y. The effects of secondary cavitation position on the velocity of a
laser-induced microjet extracted using explainable artificial intelligence. Phys. Fluids 2024, 36, 013317. [CrossRef]
323. Soto, J.L.; Uriguen, E.Z.; Garcia, X.D.C. Real-Time, Model-Agnostic and User-Driven Counterfactual Explanations Using
Autoencoders. Appl. Sci. 2023, 13, 2912. [CrossRef]
324. Han, J.; Lee, Y. Explainable Artificial Intelligence-Based Competitive Factor Identification. ACM Trans. Knowl. Discov. Data 2022,
16, 10. [CrossRef]
325. Hasan, M.; Lu, M. Enhanced model tree for quantifying output variances due to random data sampling: Productivity prediction
applications. Autom. Constr. 2024, 158, 105218. [CrossRef]
326. Sajjad, U.; Hussain, I.; Hamid, K.; Ali, H.M.; Wang, C.C.; Yan, W.M. Liquid-to-vapor phase change heat transfer evaluation and
parameter sensitivity analysis of nanoporous surface coatings. Int. J. Heat Mass Transf. 2022, 194, 123088. [CrossRef]
327. Ravi, S.K.; Roy, I.; Roychowdhury, S.; Feng, B.; Ghosh, S.; Reynolds, C.; Umretiya, R.V.; Rebak, R.B.; Hoffman, A.K. Elucidating
precipitation in FeCrAl alloys through explainable AI: A case study. Comput. Mater. Sci. 2023, 230, 112440. [CrossRef]
328. Sauter, D.; Lodde, G.; Nensa, F.; Schadendorf, D.; Livingstone, E.; Kukuk, M. Validating Automatic Concept-Based Explanations
for AI-Based Digital Histopathology. Sensors 2022, 22, 5346. [CrossRef]
329. Akilandeswari, P.; Eliazer, M.; Patil, R. Explainable AI-Reducing Costs, Finding the Optimal Path between Graphical Locations.
Int. J. Early Child. Spec. Educ. 2022, 14, 504–511. [CrossRef]
330. Aghaeipoor, F.; Sabokrou, M.; Fernandez, A. Fuzzy Rule-Based Explainer Systems for Deep Neural Networks: From Local
Explainability to Global Understanding. IEEE Trans. Fuzzy Syst. 2023, 31, 3069–3080. [CrossRef]
331. Lee, E.H.; Kim, H. Feature-Based Interpretation of the Deep Neural Network. Electronics 2021, 10, 2687. [CrossRef]
332. Hung, S.C.; Wu, H.C.; Tseng, M.H. Integrating Image Quality Enhancement Methods and Deep Learning Techniques for Remote
Sensing Scene Classification. Appl. Sci. 2021, 11, 1659. [CrossRef]
333. Heistrene, L.; Machlev, R.; Perl, M.; Belikov, J.; Baimel, D.; Levy, K.; Mannor, S.; Levron, Y. Explainability-based Trust Algorithm
for electricity price forecasting models. Energy AI 2023, 14, 100259. [CrossRef]
334. Ribeiro, D.; Matos, L.M.; Moreira, G.; Pilastri, A.; Cortez, P. Isolation Forests and Deep Autoencoders for Industrial Screw
Tightening Anomaly Detection. Computers 2022, 11, 54. [CrossRef]
335. Blomerus, N.; Cilliers, J.; Nel, W.; Blasch, E.; de Villiers, P. Feedback-Assisted Automatic Target and Clutter Discrimination
Using a Bayesian Convolutional Neural Network for Improved Explainability in SAR Applications. Remote. Sens. 2022, 14, 96.
[CrossRef]
336. Estivill-Castro, V.; Gilmore, E.; Hexel, R. Constructing Explainable Classifiers from the Start-Enabling Human-in-the Loop
Machine Learning. Information 2022, 13, 464. [CrossRef]
337. Angelotti, G.; Diaz-Rodriguez, N. Towards a more efficient computation of individual attribute and policy contribution for
post-hoc explanation of cooperative multi-agent systems using Myerson values. Knowl.-Based Syst. 2023, 260, 110189. [CrossRef]
338. Tang, R.; Liu, N.; Yang, F.; Zou, N.; Hu, X. Defense Against Explanation Manipulation. Front. Big Data 2022, 5, 704203. [CrossRef]
[PubMed]
339. Al-Sakkari, E.G.; Ragab, A.; So, T.M.Y.; Shokrollahi, M.; Dagdougui, H.; Navarri, P.; Elkamel, A.; Amazouz, M. Machine
learning-assisted selection of adsorption-based carbon dioxide capture materials. J. Environ. Chem. Eng. 2023, 11, 110732.
[CrossRef]
340. Apostolopoulos, I.D.; Apostolopoulos, D.J.; Papathanasiou, N.D. Deep Learning Methods to Reveal Important X-ray Features in
COVID-19 Detection: Investigation of Explainability and Feature Reproducibility. Reports 2022, 5, 20. [CrossRef]
341. Deramgozin, M.M.; Jovanovic, S.; Arevalillo-Herraez, M.; Ramzan, N.; Rabah, H. Attention-Enabled Lightweight Neural Network
Architecture for Detection of Action Unit Activation. IEEE Access 2023, 11, 117954–117970. [CrossRef]
342. Dassanayake, P.M.; Anjum, A.; Bashir, A.K.; Bacon, J.; Saleem, R.; Manning, W. A Deep Learning Based Explainable Control
System for Reconfigurable Networks of Edge Devices. IEEE Trans. Netw. Sci. Eng. 2022, 9, 7–19. [CrossRef]
343. Qayyum, F.; Khan, M.A.; Kim, D.H.; Ko, H.; Ryu, G.A. Explainable AI for Material Property Prediction Based on Energy Cloud:
A Shapley-Driven Approach. Materials 2023, 16, 7322. [CrossRef]
344. Lellep, M.; Prexl, J.; Eckhardt, B.; Linkmann, M. Interpreted machine learning in fluid dynamics: Explaining relaminarisation
events in wall-bounded shear flows. J. Fluid Mech. 2022, 942, A2. [CrossRef]
345. Bilc, S.; Groza, A.; Muntean, G.; Nicoara, S.D. Interleaving Automatic Segmentation and Expert Opinion for Retinal Conditions.
Diagnostics 2022, 12, 22. [CrossRef] [PubMed]
346. Sakai, A.; Komatsu, M.; Komatsu, R.; Matsuoka, R.; Yasutomi, S.; Dozen, A.; Shozu, K.; Arakaki, T.; Machino, H.; Asada, K.; et al.
Medical Professional Enhancement Using Explainable Artificial Intelligence in Fetal Cardiac Ultrasound Screening. Biomedicines
2022, 10, 551. [CrossRef] [PubMed]
347. Terzi, D.S.; Demirezen, U.; Sagiroglu, S. Explainable Credit Card Fraud Detection with Image Conversion. ADCAIJ Adv. Distrib.
Comput. Artif. Intell. J. 2021, 10, 63–76. [CrossRef]
348. Kothadiya, D.R.; Bhatt, C.M.; Rehman, A.; Alamri, F.S.; Saba, T. SignExplainer: An Explainable AI-Enabled Framework for Sign
Language Recognition with Ensemble Learning. IEEE Access 2023, 11, 47410–47419. [CrossRef]
349. Slijepcevic, D.; Zeppelzauer, M.; Unglaube, F.; Kranzl, A.; Breiteneder, C.; Horsak, B. Explainable Machine Learning in Human
Gait Analysis: A Study on Children with Cerebral Palsy. IEEE Access 2023, 11, 65906–65923. [CrossRef]
350. Hwang, C.; Lee, T. E-SFD: Explainable Sensor Fault Detection in the ICS Anomaly Detection System. IEEE Access 2021,
9, 140470–140486. [CrossRef]
351. Rivera, A.J.; Munoz, J.C.; Perez-Goody, M.D.; de San Pedro, B.S.; Charte, F.; Elizondo, D.; Rodriguez, C.; Abolafia, M.L.; Perea, A.;
del Jesus, M.J. XAIRE: An ensemble-based methodology for determining the relative importance of variables in regression tasks.
Application to a hospital emergency department. Artif. Intell. Med. 2023, 137, 102494. [CrossRef]
352. Park, J.J.; Lee, S.; Shin, S.; Kim, M.; Park, J. Development of a Light and Accurate Nox Prediction Model for Diesel Engines Using
Machine Learning and Xai Methods. Int. J. Automot. Technol. 2023, 24, 559–571. [CrossRef]
353. Abdollahi, A.; Pradhan, B. Urban Vegetation Mapping from Aerial Imagery Using Explainable AI (XAI). Sensors 2021, 21, 4738.
[CrossRef]
354. Xie, Y.; Pongsakornsathien, N.; Gardi, A.; Sabatini, R. Explanation of Machine-Learning Solutions in Air-Traffic Management.
Aerospace 2021, 8, 224. [CrossRef]
355. Al-Hawawreh, M.; Moustafa, N. Explainable deep learning for attack intelligence and combating cyber-physical attacks. Ad Hoc
Netw. 2024, 153, 103329. [CrossRef]
356. Srisuchinnawong, A.; Homchanthanakul, J.; Manoonpong, P. NeuroVis: Real-Time Neural Information Measurement and
Visualization of Embodied Neural Systems. Front. Neural Circuits 2021, 15, 743101. [CrossRef] [PubMed]
357. Dai, B.; Shen, X.; Chen, L.Y.; Li, C.; Pan, W. Data-Adaptive Discriminative Feature Localization with Statistically Guaranteed
Interpretation. Ann. Appl. Stat. 2023, 17, 2019–2038. [CrossRef]
358. Li, Z. Extracting spatial effects from machine learning model using local interpretation method: An example of SHAP and
XGBoost. Comput. Environ. Urban Syst. 2022, 96, 101845. [CrossRef]
359. Gonzalez-Gonzalez, J.; Garcia-Mendez, S.; De Arriba-Perez, F.; Gonzalez-Castano, F.J.; Barba-Seara, O. Explainable Automatic
Industrial Carbon Footprint Estimation from Bank Transaction Classification Using Natural Language Processing. IEEE Access
2022, 10, 126326–126338. [CrossRef]
360. Elayan, H.; Aloqaily, M.; Karray, F.; Guizani, M. Internet of Behavior and Explainable AI Systems for Influencing IoT Behavior.
IEEE Netw. 2023, 37, 62–68. [CrossRef]
361. Cheng, X.; Doosthosseini, A.; Kunkel, J. Improve the Deep Learning Models in Forestry Based on Explanations and Expertise.
Front. Plant Sci. 2022, 13, 902105. [CrossRef] [PubMed]
362. Qiu, W.; Chen, H.; Kaeberlein, M.; Lee, S.I. ExplaiNAble BioLogical Age (ENABL Age): An artificial intelligence framework for
interpretable biological age. Lancet Healthy Longev. 2023, 4, E711–E723. [CrossRef]
363. Abba, S.I.; Yassin, M.A.; Mubarak, A.S.; Shah, S.M.H.; Usman, J.; Oudah, A.Y.; Naganna, S.R.; Aljundi, I.H. Drinking Water
Resources Suitability Assessment Based on Pollution Index of Groundwater Using Improved Explainable Artificial Intelligence.
Sustainability 2023, 15, 5655. [CrossRef]
364. Martinez-Seras, A.; Del Ser, J.; Lobo, J.L.; Garcia-Bringas, P.; Kasabov, N. A novel Out-of-Distribution detection approach for
Spiking Neural Networks: Design, fusion, performance evaluation and explainability. Inf. Fusion 2023, 100, 101943. [CrossRef]
365. Krupp, L.; Wiede, C.; Friedhoff, J.; Grabmaier, A. Explainable Remaining Tool Life Prediction for Individualized Production
Using Automated Machine Learning. Sensors 2023, 23, 8523. [CrossRef] [PubMed]
366. Nayebi, A.; Tipirneni, S.; Reddy, C.K.; Foreman, B.; Subbian, V. WindowSHAP: An efficient framework for explaining time-series
classifiers based on Shapley values. J. Biomed. Inform. 2023, 144, 104438. [CrossRef] [PubMed]
367. Lee, J.; Jeong, J.; Jung, S.; Moon, J.; Rho, S. Verification of De-Identification Techniques for Personal Information Using Tree-Based
Methods with Shapley Values. J. Pers. Med. 2022, 12, 190. [CrossRef]
368. Nahiduzzaman, M.; Chowdhury, M.E.H.; Salam, A.; Nahid, E.; Ahmed, F.; Al-Emadi, N.; Ayari, M.A.; Khandakar, A.; Haider, J.
Explainable deep learning model for automatic mulberry leaf disease classification. Front. Plant Sci. 2023, 14, 1175515. [CrossRef]
[PubMed]
369. Khan, A.; Ul Haq, I.; Hussain, T.; Muhammad, K.; Hijji, M.; Sajjad, M.; De Albuquerque, V.H.C.; Baik, S.W. PMAL: A Proxy Model
Active Learning Approach for Vision Based Industrial Applications. ACM Trans. Multimed. Comput. Commun. Appl. 2022, 18, 123.
[CrossRef]
370. Beucher, A.; Rasmussen, C.B.; Moeslund, T.B.; Greve, M.H. Interpretation of Convolutional Neural Networks for Acid Sulfate
Soil Classification. Front. Environ. Sci. 2022, 9, 809995. [CrossRef]
371. Kui, B.; Pinter, J.; Molontay, R.; Nagy, M.; Farkas, N.; Gede, N.; Vincze, A.; Bajor, J.; Godi, S.; Czimmer, J.; et al. EASY-APP: An
artificial intelligence model and application for early and easy prediction of severity in acute pancreatitis. Clin. Transl. Med. 2022,
12, e842. [CrossRef]
372. Szandala, T. Unlocking the black box of CNNs: Visualising the decision-making process with PRISM. Inf. Sci. 2023, 642, 119162.
[CrossRef]
373. Rengasamy, D.; Rothwell, B.C.; Figueredo, G.P. Towards a More Reliable Interpretation of Machine Learning Outputs for
Safety-Critical Systems Using Feature Importance Fusion. Appl. Sci. 2021, 11, 1854. [CrossRef]
374. Jahin, M.A.; Shovon, M.S.H.; Islam, M.S.; Shin, J.; Mridha, M.F.; Okuyama, Y. QAmplifyNet: Pushing the boundaries of supply
chain backorder prediction using interpretable hybrid quantum-classical neural network. Sci. Rep. 2023, 13, 18246. [CrossRef]
[PubMed]
375. Nielsen, I.E.; Ramachandran, R.P.; Bouaynaya, N.; Fathallah-Shaykh, H.M.; Rasool, G. EvalAttAI: A Holistic Approach to
Evaluating Attribution Maps in Robust and Non-Robust Models. IEEE Access 2023, 11, 82556–82569. [CrossRef]
376. Hashem, H.A.; Abdulazeem, Y.; Labib, L.M.; Elhosseini, M.A.; Shehata, M. An Integrated Machine Learning-Based Brain
Computer Interface to Classify Diverse Limb Motor Tasks: Explainable Model. Sensors 2023, 23, 3171. [CrossRef] [PubMed]
377. Lin, R.; Wichadakul, D. Interpretable Deep Learning Model Reveals Subsequences of Various Functions for Long Non-Coding
RNA Identification. Front. Genet. 2022, 13, 876721. [CrossRef]
378. Chen, H.; Yang, L.; Wu, Q. Enhancing Land Cover Mapping and Monitoring: An Interactive and Explainable Machine Learning
Approach Using Google Earth Engine. Remote. Sens. 2023, 15, 4585. [CrossRef]
379. Oveis, A.H.; Giusti, E.; Ghio, S.; Meucci, G.; Martorella, M. LIME-Assisted Automatic Target Recognition with SAR Images:
Toward Incremental Learning and Explainability. IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens. 2023, 16, 9175–9192. [CrossRef]
380. Llorca-Schenk, J.; Rico-Juan, J.R.; Sanchez-Lozano, M. Designing porthole aluminium extrusion dies on the basis of eXplainable
Artificial Intelligence. Expert Syst. Appl. 2023, 222, 119808. [CrossRef]
381. Diaz, G.M.; Hernandez, J.J.G.; Salvador, J.L.G. Analyzing Employee Attrition Using Explainable AI for Strategic HR Decision-
Making. Mathematics 2023, 11, 4677. [CrossRef]
382. Pelaez-Rodriguez, C.; Marina, C.M.; Perez-Aracil, J.; Casanova-Mateo, C.; Salcedo-Sanz, S. Extreme Low-Visibility Events
Prediction Based on Inductive and Evolutionary Decision Rules: An Explicability-Based Approach. Atmosphere 2023, 14, 542.
[CrossRef]
383. An, J.; Zhang, Y.; Joe, I. Specific-Input LIME Explanations for Tabular Data Based on Deep Learning Models. Appl. Sci. 2023, 13,
8782. [CrossRef]
384. Glick, A.; Clayton, M.; Angelov, N.; Chang, J. Impact of explainable artificial intelligence assistance on clinical decision-making of
novice dental clinicians. JAMIA Open 2022, 5, ooac031. [CrossRef] [PubMed]
385. Qureshi, Y.M.; Voloshin, V.; Facchinelli, L.; McCall, P.J.; Chervova, O.; Towers, C.E.; Covington, J.A.; Towers, D.P. Finding a
Husband: Using Explainable AI to Define Male Mosquito Flight Differences. Biology 2023, 12, 496. [CrossRef] [PubMed]
386. Wen, B.; Wang, N.; Subbalakshmi, K.; Chandramouli, R. Revealing the Roles of Part-of-Speech Taggers in Alzheimer Disease
Detection: Scientific Discovery Using One-Intervention Causal Explanation. JMIR Form. Res. 2023, 7, e36590. [CrossRef]
[PubMed]
387. Alvey, B.; Anderson, D.; Keller, J.; Buck, A. Linguistic Explanations of Black Box Deep Learning Detectors on Simulated Aerial
Drone Imagery. Sensors 2023, 23, 6879. [CrossRef] [PubMed]
388. Hou, B.; Gao, J.; Guo, X.; Baker, T.; Zhang, Y.; Wen, Y.; Liu, Z. Mitigating the Backdoor Attack by Federated Filters for Industrial
IoT Applications. IEEE Trans. Ind. Inform. 2022, 18, 3562–3571. [CrossRef]
389. Nakagawa, P.I.; Pires, L.F.; Moreira, J.L.R.; Santos, L.O.B.d.S.; Bukhsh, F. Semantic Description of Explainable Machine Learning
Workflows for Improving Trust. Appl. Sci. 2021, 11, 804. [CrossRef]
390. Yang, M.; Moon, J.; Yang, S.; Oh, H.; Lee, S.; Kim, Y.; Jeong, J. Design and Implementation of an Explainable Bidirectional LSTM
Model Based on Transition System Approach for Cooperative AI-Workers. Appl. Sci. 2022, 12, 6390. [CrossRef]
391. O’Shea, R.; Manickavasagar, T.; Horst, C.; Hughes, D.; Cusack, J.; Tsoka, S.; Cook, G.; Goh, V. Weakly supervised segmentation
models as explainable radiological classifiers for lung tumour detection on CT images. Insights Imaging 2023, 14, 195. [CrossRef]
[PubMed]
392. Tasnim, N.; Al Mamun, S.; Shahidul Islam, M.; Kaiser, M.S.; Mahmud, M. Explainable Mortality Prediction Model for Congestive
Heart Failure with Nature-Based Feature Selection Method. Appl. Sci. 2023, 13, 6138. [CrossRef]
393. Marques-Silva, J.; Ignatiev, A. No silver bullet: Interpretable ML models must be explained. Front. Artif. Intell. 2023, 6, 1128212.
[CrossRef] [PubMed]
394. Pedraza, A.; del Rio, D.; Bautista-Juzgado, V.; Fernandez-Lopez, A.; Sanz-Andres, A. Study of the Feasibility of Decoupling
Temperature and Strain from a ϕ-PA-OFDR over an SMF Using Neural Networks. Sensors 2023, 23, 5515. [CrossRef] [PubMed]
395. Kwon, S.; Lee, Y. Explainability-Based Mix-Up Approach for Text Data Augmentation. ACM Trans. Knowl. Discov. Data 2023,
17, 13. [CrossRef]
396. Rosenberg, G.; Brubaker, J.K.; Schuetz, M.J.A.; Salton, G.; Zhu, Z.; Zhu, E.Y.; Kadioglu, S.; Borujeni, S.E.; Katzgraber, H.G.
Explainable Artificial Intelligence Using Expressive Boolean Formulas. Mach. Learn. Knowl. Extr. 2023, 5, 1760–1795. [CrossRef]
397. O’Sullivan, C.M.; Deo, R.C.; Ghahramani, A. Explainable AI approach with original vegetation data classifies spatio-temporal
nitrogen in flows from ungauged catchments to the Great Barrier Reef. Sci. Rep. 2023, 13, 18145. [CrossRef]
398. Richter, Y.; Balal, N.; Pinhasi, Y. Neural-Network-Based Target Classification and Range Detection by CW MMW Radar. Remote.
Sens. 2023, 15, 4553. [CrossRef]
399. Dong, G.; Ma, Y.; Basu, A. Feature-Guided CNN for Denoising Images from Portable Ultrasound Devices. IEEE Access 2021,
9, 28272–28281. [CrossRef]
400. Murala, D.K.; Panda, S.K.; Dash, S.P. MedMetaverse: Medical Care of Chronic Disease Patients and Managing Data Using
Artificial Intelligence, Blockchain, and Wearable Devices State-of-the-Art Methodology. IEEE Access 2023, 11, 138954–138985.
[CrossRef]
401. Brakefield, W.S.; Ammar, N.; Shaban-Nejad, A. An Urban Population Health Observatory for Disease Causal Pathway Analysis
and Decision Support: Underlying Explainable Artificial Intelligence Model. JMIR Form. Res. 2022, 6, e36055. [CrossRef]
402. Ortega, A.; Fierrez, J.; Morales, A.; Wang, Z.; de la Cruz, M.; Alonso, C.L.; Ribeiro, T. Symbolic AI for XAI: Evaluating LFIT
Inductive Programming for Explaining Biases in Machine Learning. Computers 2021, 10, 154. [CrossRef]
403. An, J.; Joe, I. Attention Map-Guided Visual Explanations for Deep Neural Networks. Appl. Sci. 2022, 12, 3846. [CrossRef]
404. Huang, X.; Sun, Y.; Feng, S.; Ye, Y.; Li, X. Better Visual Interpretation for Remote Sensing Scene Classification. IEEE Geosci. Remote.
Sens. Lett. 2022, 19, 6504305. [CrossRef]
405. Senocak, A.U.G.; Yilmaz, M.T.; Kalkan, S.; Yucel, I.; Amjad, M. An explainable two-stage machine learning approach for
precipitation forecast. J. Hydrol. 2023, 627, 130375. [CrossRef]
406. Kalutharage, C.S.; Liu, X.; Chrysoulas, C.; Pitropakis, N.; Papadopoulos, P. Explainable AI-Based DDOS Attack Identification
Method for IoT Networks. Computers 2023, 12, 32. [CrossRef]
407. Sorayaie Azar, A.; Naemi, A.; Babaei Rikan, S.; Mohasefi, J.B.; Pirnejad, H.; Wiil, U.K. Monkeypox detection using deep neural
networks. BMC Infect. Dis. 2023, 23, 438. [CrossRef] [PubMed]
408. Di Stefano, V.; Prinzi, F.; Luigetti, M.; Russo, M.; Tozza, S.; Alonge, P.; Romano, A.; Sciarrone, M.A.; Vitali, F.; Mazzeo, A.; et al.
Machine Learning for Early Diagnosis of ATTRv Amyloidosis in Non-Endemic Areas: A Multicenter Study from Italy. Brain Sci.
2023, 13, 805. [CrossRef] [PubMed]
409. Huong, T.T.; Bac, T.P.; Ha, K.N.; Hoang, N.V.; Hoang, N.X.; Hung, N.T.; Tran, K.P. Federated Learning-Based Explainable
Anomaly Detection for Industrial Control Systems. IEEE Access 2022, 10, 53854–53872. [CrossRef]
410. Diefenbach, S.; Christoforakos, L.; Ullrich, D.; Butz, A. Invisible but Understandable: In Search of the Sweet Spot between
Technology Invisibility and Transparency in Smart Spaces and Beyond. Multimodal Technol. Interact. 2022, 6, 95. [CrossRef]
411. Patel, J.; Amipara, C.; Ahanger, T.A.; Ladhva, K.; Gupta, R.K.; Alsaab, H.O.O.; Althobaiti, Y.S.S.; Ratna, R. A Machine Learning-
Based Water Potability Prediction Model by Using Synthetic Minority Oversampling Technique and Explainable AI. Comput.
Intell. Neurosci. 2022, 2022, 9283293. [CrossRef]
412. Kim, J.K.; Lee, K.; Hong, S.G. Cognitive Load Recognition Based on T-Test and SHAP from Wristband Sensors. Hum.-Centric
Comput. Inf. Sci. 2023, 13. [CrossRef]
413. Schroeder, M.; Zamanian, A.; Ahmidi, N. What about the Latent Space? The Need for Latent Feature Saliency Detection in Deep
Time Series Classification. Mach. Learn. Knowl. Extr. 2023, 5, 539–559. [CrossRef]
414. Singh, A.; Pannu, H.; Malhi, A. Explainable Information Retrieval using Deep Learning for Medical images. Comput. Sci. Inf.
Syst. 2022, 19, 277–307. [CrossRef]
415. Kumara, I.; Ariz, M.H.; Chhetri, M.B.; Mohammadi, M.; Van Den Heuvel, W.J.; Tamburri, D.A. FOCloud: Feature Model Guided
Performance Prediction and Explanation for Deployment Configurable Cloud Applications. IEEE Trans. Serv. Comput. 2023,
16, 302–314. [CrossRef]
416. Konforti, Y.; Shpigler, A.; Lerner, B.; Bar-Hillel, A. SIGN: Statistical Inference Graphs Based on Probabilistic Network Activity
Interpretation. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 3783–3797. [CrossRef] [PubMed]
417. Oblak, T.; Haraksim, R.; Beslay, L.; Peer, P. Probabilistic Fingermark Quality Assessment with Quality Region Localisation.
Sensors 2023, 23, 4006. [CrossRef]
418. Le, T.T.H.; Kang, H.; Kim, H. Robust Adversarial Attack Against Explainable Deep Classification Models Based on Adversarial
Images with Different Patch Sizes and Perturbation Ratios. IEEE Access 2021, 9, 133049–133061. [CrossRef]
419. Capuozzo, S.; Gravina, M.; Gatta, G.; Marrone, S.; Sansone, C. A Multimodal Knowledge-Based Deep Learning Approach for
MGMT Promoter Methylation Identification. J. Imaging 2022, 8, 321. [CrossRef] [PubMed]
420. Vo, H.T.; Thien, N.N.; Mui, K.C. A Deep Transfer Learning Approach for Accurate Dragon Fruit Ripeness Classification and
Visual Explanation using Grad-CAM. Int. J. Adv. Comput. Sci. Appl. 2023, 14, 1344–1352. [CrossRef]
421. Artelt, A.; Hammer, B. Efficient computation of counterfactual explanations and counterfactual metrics of prototype-based
classifiers. Neurocomputing 2022, 470, 304–317. [CrossRef]
422. Abeyrathna, K.D.; Granmo, O.C.; Goodwin, M. Adaptive Sparse Representation of Continuous Input for Tsetlin Machines Based
on Stochastic Searching on the Line. Electronics 2021, 10, 2107. [CrossRef]
423. Pandiyan, V.; Wrobel, R.; Leinenbach, C.; Shevchik, S. Optimizing in-situ monitoring for laser powder bed fusion process:
Deciphering acoustic emission and sensor sensitivity with explainable machine learning. J. Mater. Process. Technol. 2023,
321, 118144. [CrossRef]
424. Jeon, M.; Kim, T.; Kim, S.; Lee, C.; Youn, C.H. Recursive Visual Explanations Mediation Scheme Based on DropAttention Model
with Multiple Episodes Pool. IEEE Access 2023, 11, 4306–4321. [CrossRef]
425. Jia, B.; Qiao, W.; Zong, Z.; Liu, S.; Hijji, M.; Del Ser, J.; Muhammadh, K. A fingerprint-based localization algorithm based on
LSTM and data expansion method for sparse samples. Future Gener. Comput. Syst. 2022, 137, 380–393. [CrossRef]
426. Munkhdalai, L.; Munkhdalai, T.; Pham, V.H.; Hong, J.E.; Ryu, K.H.; Theera-Umpon, N. Neural Network-Augmented Locally
Adaptive Linear Regression Model for Tabular Data. Sustainability 2022, 14, 5273. [CrossRef]
427. Gouabou, A.C.F.; Collenne, J.; Monnier, J.; Iguernaissi, R.; Damoiseaux, J.L.; Moudafi, A.; Merad, D. Computer Aided Diagnosis
of Melanoma Using Deep Neural Networks and Game Theory: Application on Dermoscopic Images of Skin Lesions. Int. J. Mol.
Sci. 2022, 23, 3838. [CrossRef] [PubMed]
428. Abeyrathna, K.D.; Granmo, O.C.; Goodwin, M. Extending the Tsetlin Machine with Integer-Weighted Clauses for Increased
Interpretability. IEEE Access 2021, 9, 8233–8248. [CrossRef]
429. Nagaoka, T.; Kozuka, T.; Yamada, T.; Habe, H.; Nemoto, M.; Tada, M.; Abe, K.; Handa, H.; Yoshida, H.; Ishii, K.; et al. A Deep
Learning System to Diagnose COVID-19 Pneumonia Using Masked Lung CT Images to Avoid AI-generated COVID-19 Diagnoses
that Include Data outside the Lungs. Adv. Biomed. Eng. 2022, 11, 76–86. [CrossRef]
430. Ali, S.; Hussain, A.; Bhattacharjee, S.; Athar, A.; Abdullah, A.; Kim, H.C. Detection of COVID-19 in X-ray Images Using Densely
Connected Squeeze Convolutional Neural Network (DCSCNN): Focusing on Interpretability and Explainability of the Black Box
Model. Sensors 2022, 22, 9983. [CrossRef]
431. Elbagoury, B.M.; Vladareanu, L.; Vladareanu, V.; Salem, A.B.; Travediu, A.M.; Roushdy, M.I. A Hybrid Stacked CNN and
Residual Feedback GMDH-LSTM Deep Learning Model for Stroke Prediction Applied on Mobile AI Smart Hospital Platform.
Sensors 2023, 23, 3500. [CrossRef] [PubMed]
432. Yuan, L.; Andrews, J.; Mu, H.; Vakil, A.; Ewing, R.; Blasch, E.; Li, J. Interpretable Passive Multi-Modal Sensor Fusion for Human
Identification and Activity Recognition. Sensors 2022, 22, 5787. [CrossRef]
433. Someetheram, V.; Marsani, M.F.; Mohd Kasihmuddin, M.S.; Zamri, N.E.; Muhammad Sidik, S.S.; Mohd Jamaludin, S.Z.; Mansor,
M.A. Random Maximum 2 Satisfiability Logic in Discrete Hopfield Neural Network Incorporating Improved Election Algorithm.
Mathematics 2022, 10, 4734. [CrossRef]
434. Sudars, K.; Namatevs, I.; Ozols, K. Improving Performance of the PRYSTINE Traffic Sign Classification by Using a Perturbation-
Based Explainability Approach. J. Imaging 2022, 8, 30. [CrossRef] [PubMed]
435. Aslam, N.; Khan, I.U.; Bader, S.A.; Alansari, A.; Alaqeel, L.A.; Khormy, R.M.; Alkubaish, Z.A.; Hussain, T. Explainable
Classification Model for Android Malware Analysis Using API and Permission-Based Features. CMC-Comput. Mater. Contin.
2023, 76, 3167–3188. [CrossRef]
436. Shin, C.Y.; Park, J.T.; Baek, U.J.; Kim, M.S. A Feasible and Explainable Network Traffic Classifier Utilizing DistilBERT. IEEE Access
2023, 11, 70216–70237. [CrossRef]
437. Samir, M.; Sherief, N.; Abdelmoez, W. Improving Bug Assignment and Developer Allocation in Software Engineering through
Interpretable Machine Learning Models. Computers 2023, 12, 128. [CrossRef]
438. Guidotti, R.; D’Onofrio, M. Matrix Profile-Based Interpretable Time Series Classifier. Front. Artif. Intell. 2021, 4, 699448. [CrossRef]
439. Ekanayake, I.U.; Palitha, S.; Gamage, S.; Meddage, D.P.P.; Wijesooriya, K.; Mohotti, D. Predicting adhesion strength of
micropatterned surfaces using gradient boosting models and explainable artificial intelligence visualizations. Mater. Today
Commun. 2023, 36, 106545. [CrossRef]
440. Kobayashi, K.; Alam, S.B. Explainable, interpretable, and trustworthy AI for an intelligent digital twin: A case study on remaining
useful life. Eng. Appl. Artif. Intell. 2024, 129, 107620. [CrossRef]
441. Bitar, A.; Rosales, R.; Paulitsch, M. Gradient-based feature-attribution explainability methods for spiking neural networks. Front.
Neurosci. 2023, 17, 1153999. [CrossRef] [PubMed]
442. Kim, H.; Kim, J.S.; Chung, C.K. Identification of cerebral cortices processing acceleration, velocity, and position during directional
reaching movement with deep neural network and explainable AI. Neuroimage 2023, 266, 119783. [CrossRef]
443. Khondker, A.; Kwong, J.C.C.; Rickard, M.; Skreta, M.; Keefe, D.T.; Lorenzo, A.J.; Erdman, L. A machine learning-based approach
for quantitative grading of vesicoureteral reflux from voiding cystourethrograms: Methods and proof of concept. J. Pediatr. Urol.
2022, 18, 78.e1–78.e7. [CrossRef]
444. Lucieri, A.; Dengel, A.; Ahmed, S. Translating theory into practice: Assessing the privacy implications of concept-based
explanations for biomedical AI. Front. Bioinform. 2023, 3, 1194993. [CrossRef] [PubMed]
445. Suhail, S.; Iqbal, M.; Hussain, R.; Jurdak, R. ENIGMA: An explainable digital twin security solution for cyber-physical systems.
Comput. Ind. 2023, 151, 103961. [CrossRef]
446. Bacco, L.; Cimino, A.; Dell’Orletta, F.; Merone, M. Explainable Sentiment Analysis: A Hierarchical Transformer-Based Extractive
Summarization Approach. Electronics 2021, 10, 2195. [CrossRef]
447. Prakash, A.J.; Patro, K.K.; Saunak, S.; Sasmal, P.; Kumari, P.L.; Geetamma, T. A New Approach of Transparent and Explainable
Artificial Intelligence Technique for Patient-Specific ECG Beat Classification. IEEE Sens. Lett. 2023, 7, 5501604. [CrossRef]
448. Alani, M.M.; Awad, A.I. PAIRED: An Explainable Lightweight Android Malware Detection System. IEEE Access 2022, 10,
73214–73228. [CrossRef]
449. Maloca, P.M.; Mueller, P.L.; Lee, A.Y.; Tufail, A.; Balaskas, K.; Niklaus, S.; Kaiser, P.; Suter, S.; Zarranz-Ventura, J.; Egan, C.;
et al. Unraveling the deep learning gearbox in optical coherence tomography image segmentation towards explainable artificial
intelligence. Commun. Biol. 2021, 4, 170. [CrossRef] [PubMed]
450. Ahn, I.; Gwon, H.; Kang, H.; Kim, Y.; Seo, H.; Choi, H.; Cho, H.N.; Kim, M.; Jun, T.J.; Kim, Y.H. Machine Learning-Based Hospital
Discharge Prediction for Patients with Cardiovascular Diseases: Development and Usability Study. JMIR Med. Inform. 2021,
9, e32662. [CrossRef]
451. Hammer, J.; Schirrmeister, R.T.; Hartmann, K.; Marusic, P.; Schulze-Bonhage, A.; Ball, T. Interpretable functional specialization
emerges in deep convolutional networks trained on brain signals. J. Neural Eng. 2022, 19, 036006. [CrossRef]
452. Ikushima, H.; Usui, K. Identification of age-dependent features of human bronchi using explainable artificial intelligence. ERJ
Open Res. 2023, 9. [CrossRef]
453. Kalir, A.A.; Lo, S.K.; Goldberg, G.; Zingerman-Koladko, I.; Ohana, A.; Revah, Y.; Chimol, T.B.; Honig, G. Leveraging Machine
Learning for Capacity and Cost on a Complex Toolset: A Case Study. IEEE Trans. Semicond. Manuf. 2023, 36, 611–618. [CrossRef]
454. Shin, H.; Noh, G.; Choi, B.M. Photoplethysmogram based vascular aging assessment using the deep convolutional neural
network. Sci. Rep. 2022, 12, 11377. [CrossRef] [PubMed]
455. Chandra, H.; Pawar, P.M.; Elakkiya, R.; Tamizharasan, P.S.; Muthalagu, R.; Panthakkan, A. Explainable AI for Soil Fertility
Prediction. IEEE Access 2023, 11, 97866–97878. [CrossRef]
456. Blix, K.; Ruescas, A.B.; Johnson, J.E.; Camps-Valls, G. Learning Relevant Features of Optical Water Types. IEEE Geosci. Remote
Sens. Lett. 2022, 19, 1502105. [CrossRef]
457. Topp, S.N.; Barclay, J.; Diaz, J.; Sun, A.Y.; Jia, X.; Lu, D.; Sadler, J.M.; Appling, A.P. Stream Temperature Prediction in a Shifting
Environment: Explaining the Influence of Deep Learning Architecture. Water Resour. Res. 2023, 59, e2022WR033880. [CrossRef]
458. Till, T.; Tschauner, S.; Singer, G.; Lichtenegger, K.; Till, H. Development and optimization of AI algorithms for wrist fracture
detection in children using a freely available dataset. Front. Pediatr. 2023, 11, 1291804. [CrossRef] [PubMed]
459. Aswad, F.M.; Kareem, A.N.; Khudhur, A.M.; Khalaf, B.A.; Mostafa, S.A. Tree-based machine learning algorithms in the Internet
of Things environment for multivariate flood status prediction. J. Intell. Syst. 2022, 31, 1–14. [CrossRef]
460. Ghosh, I.; Alfaro-Cortes, E.; Gamez, M.; Garcia-Rubio, N. Modeling hydro, nuclear, and renewable electricity generation in India:
An atom search optimization-based EEMD-DBSCAN framework and explainable AI. Heliyon 2024, 10, e23434. [CrossRef]
461. Mohanrajan, S.N.; Loganathan, A. Novel Vision Transformer-Based Bi-LSTM Model for LU/LC Prediction-Javadi Hills, India.
Appl. Sci. 2022, 12, 6387. [CrossRef]
462. Zhang, L.; Bibi, F.; Hussain, I.; Sultan, M.; Arshad, A.; Hasnain, S.; Alarifi, I.M.; Alamir, M.A.; Sajjad, U. Evaluating the
Stress-Strain Relationship of the Additively Manufactured Lattice Structures. Micromachines 2023, 14, 75. [CrossRef] [PubMed]
463. Wang, H.; Doumard, E.; Soule-Dupuy, C.; Kemoun, P.; Aligon, J.; Monsarrat, P. Explanations as a New Metric for Feature Selection:
A Systematic Approach. IEEE J. Biomed. Health Inform. 2023, 27, 4131–4142. [CrossRef] [PubMed]
464. Pierrard, R.; Poli, J.P.; Hudelot, C. Spatial relation learning for explainable image classification and annotation in critical
applications. Artif. Intell. 2021, 292, 103434. [CrossRef]
465. Praetorius, J.P.; Walluks, K.; Svensson, C.M.; Arnold, D.; Figge, M.T. IMFSegNet: Cost-effective and objective quantification of
intramuscular fat in histological sections by deep learning. Comput. Struct. Biotechnol. J. 2023, 21, 3696–3704. [CrossRef] [PubMed]
466. Pan, S.; Hoque, S.; Deravi, F. An Attention-Guided Framework for Explainable Biometric Presentation Attack Detection. Sensors
2022, 22, 3365. [CrossRef] [PubMed]
467. Wang, Y.; Huang, M.; Deng, H.; Li, W.; Wu, Z.; Tang, Y.; Liu, G. Identification of vital chemical information via visualization of
graph neural networks. Briefings Bioinform. 2023, 24, bbac577. [CrossRef] [PubMed]
468. Naser, M.Z. CLEMSON: An Automated Machine-Learning Virtual Assistant for Accelerated, Simulation-Free, Transparent,
Reduced-Order, and Inference-Based Reconstruction of Fire Response of Structural Members. J. Struct. Eng. 2022, 148, 04022120.
[CrossRef]
469. Karamanou, A.; Brimos, P.; Kalampokis, E.; Tarabanis, K. Exploring the Quality of Dynamic Open Government Data Using
Statistical and Machine Learning Methods. Sensors 2022, 22, 9684. [CrossRef]
470. Kim, T.; Kwon, S.; Kwon, Y. Prediction of Wave Transmission Characteristics of Low-Crested Structures with Comprehensive
Analysis of Machine Learning. Sensors 2021, 21, 8192. [CrossRef]
471. Gong, H.; Wang, M.; Zhang, H.; Elahe, M.F.; Jin, M. An Explainable AI Approach for the Rapid Diagnosis of COVID-19 Using
Ensemble Learning Algorithms. Front. Public Health 2022, 10, 874455. [CrossRef] [PubMed]
472. Burzynski, D. Useful energy prediction model of a Lithium-ion cell operating on various duty cycles. Eksploat. Niezawodn.-Maint. Reliab. 2022, 24, 317–329. [CrossRef]
473. Kim, D.; Ho, C.H.; Park, I.; Kim, J.; Chang, L.S.; Choi, M.H. Untangling the contribution of input parameters to an artificial
intelligence PM2.5 forecast model using the layer-wise relevance propagation method. Atmos. Environ. 2022, 276, 119034.
[CrossRef]
474. Galiger, G.; Bodo, Z. Explainable patch-level histopathology tissue type detection with bag-of-local-features models and data
augmentation. Acta Univ. Sapientiae Inform. 2023, 15, 60–80. [CrossRef]
475. Naeem, H.; Dong, S.; Falana, O.J.; Ullah, F. Development of a deep stacked ensemble with process based volatile memory
forensics for platform independent malware detection and classification. Expert Syst. Appl. 2023, 223, 119952. [CrossRef]
476. Uddin, M.Z.; Soylu, A. Human activity recognition using wearable sensors, discriminant analysis, and long short-term memory-
based neural structured learning. Sci. Rep. 2021, 11, 16455. [CrossRef] [PubMed]
477. Sinha, A.; Das, D. XAI-LCS: Explainable AI-Based Fault Diagnosis of Low-Cost Sensors. IEEE Sens. Lett. 2023, 7, 6009304.
[CrossRef]
478. Jacinto, M.V.G.; Neto, A.D.D.; de Castro, D.L.; Bezerra, F.H.R. Karstified zone interpretation using deep learning algorithms:
Convolutional neural networks applications and model interpretability with explainable AI. Comput. Geosci. 2023, 171, 105281.
[CrossRef]
479. Jakubowski, J.; Stanisz, P.; Bobek, S.; Nalepa, G.J. Anomaly Detection in Asset Degradation Process Using Variational Autoencoder
and Explanations. Sensors 2022, 22, 291. [CrossRef]
480. Guo, C.; Zhao, Z.; Ren, J.; Wang, S.; Liu, Y.; Chen, X. Causal explaining guided domain generalization for rotating machinery
intelligent fault diagnosis. Expert Syst. Appl. 2024, 243, 122806. [CrossRef]
481. Shi, X.; Keenan, T.D.L.; Chen, Q.; De Silva, T.; Thavikulwat, A.T.; Broadhead, G.; Bhandari, S.; Cukras, C.; Chew, E.Y.; Lu, Z.
Improving Interpretability in Machine Diagnosis Detection of Geographic Atrophy in OCT Scans. Ophthalmol. Sci. 2021, 1, 100038.
[CrossRef]
482. Panos, B.; Kleint, L.; Zbinden, J. Identifying preflare spectral features using explainable artificial intelligence. Astron. Astrophys.
2023, 671, A73. [CrossRef]
483. Fang, H.; Shao, Y.; Xie, C.; Tian, B.; Shen, C.; Zhu, Y.; Guo, Y.; Yang, Y.; Chen, G.; Zhang, M. A New Approach to Spatial
Landslide Susceptibility Prediction in Karst Mining Areas Based on Explainable Artificial Intelligence. Sustainability 2023, 15,
3094. [CrossRef]
484. Karami, H.; Derakhshani, A.; Ghasemigol, M.; Fereidouni, M.; Miri-Moghaddam, E.; Baradaran, B.; Tabrizi, N.J.; Najafi, S.;
Solimando, A.G.; Marsh, L.M.; et al. Weighted Gene Co-Expression Network Analysis Combined with Machine Learning
Validation to Identify Key Modules and Hub Genes Associated with SARS-CoV-2 Infection. J. Clin. Med. 2021, 10, 3567. [CrossRef]
485. Baek, M.; Kim, S.B. Failure Detection and Primary Cause Identification of Multivariate Time Series Data in Semiconductor
Equipment. IEEE Access 2023, 11, 54363–54372. [CrossRef]
486. Nguyen, P.X.; Tran, T.H.; Pham, N.B.; Do, D.N.; Yairi, T. Human Language Explanation for a Decision Making Agent via
Automated Rationale Generation. IEEE Access 2022, 10, 110727–110741. [CrossRef]
487. Shahriar, S.M.; Bhuiyan, E.A.; Nahiduzzaman, M.; Ahsan, M.; Haider, J. State of Charge Estimation for Electric Vehicle Battery
Management Systems Using the Hybrid Recurrent Learning Approach with Explainable Artificial Intelligence. Energies 2022,
15, 8003. [CrossRef]
488. Kim, D.; Handayani, M.P.; Lee, S.; Lee, J. Feature Attribution Analysis to Quantify the Impact of Oceanographic and Maneuver-
ability Factors on Vessel Shaft Power Using Explainable Tree-Based Model. Sensors 2023, 23, 1072. [CrossRef] [PubMed]
489. Lemanska-Perek, A.; Krzyzanowska-Golab, D.; Kobylinska, K.; Biecek, P.; Skalec, T.; Tyszko, M.; Gozdzik, W.; Adamik, B.
Explainable Artificial Intelligence Helps in Understanding the Effect of Fibronectin on Survival of Sepsis. Cells 2022, 11, 2433.
[CrossRef] [PubMed]
490. Minutti-Martinez, C.; Escalante-Ramirez, B.; Olveres-Montiel, J. PumaMedNet-CXR: An Explainable Generative Artificial
Intelligence for the Analysis and Classification of Chest X-Ray Images. Comput. y Sist. 2023, 27, 909–920. [CrossRef]
491. Kim, T.; Moon, N.H.; Goh, T.S.; Jung, I.D. Detection of incomplete atypical femoral fracture on anteroposterior radiographs via
explainable artificial intelligence. Sci. Rep. 2023, 13, 10415. [CrossRef] [PubMed]
492. Humer, C.; Heberle, H.; Montanari, F.; Wolf, T.; Huber, F.; Henderson, R.; Heinrich, J.; Streit, M. ChemInformatics Model Explorer
(CIME): Exploratory analysis of chemical model explanations. J. Cheminform. 2022, 14, 21. [CrossRef]
493. Zhang, K.; Zhang, J.; Xu, P.; Gao, T.; Gao, W. A multi-hierarchical interpretable method for DRL-based dispatching control in
power systems. Int. J. Electr. Power Energy Syst. 2023, 152, 109240. [CrossRef]
494. Yang, J.; Yue, Z.; Yuan, Y. Noise-Aware Sparse Gaussian Processes and Application to Reliable Industrial Machinery Health
Monitoring. IEEE Trans. Ind. Inform. 2023, 19, 5995–6005. [CrossRef]
495. Cheng, F.; Liu, D.; Du, F.; Lin, Y.; Zytek, A.; Li, H.; Qu, H.; Veeramachaneni, K. VBridge: Connecting the Dots between Features
and Data to Explain Healthcare Models. IEEE Trans. Vis. Comput. Graph. 2022, 28, 378–388. [CrossRef]
496. Laqua, A.; Schnee, J.; Pletinckx, J.; Meywerk, M. Exploring User Experience in Sustainable Transport with Explainable AI Methods
Applied to E-Bikes. Appl. Sci. 2023, 13, 1277. [CrossRef]
497. Sanderson, J.; Mao, H.; Abdullah, M.A.M.; Al-Nima, R.R.O.; Woo, W.L. Optimal Fusion of Multispectral Optical and SAR Images
for Flood Inundation Mapping through Explainable Deep Learning. Information 2023, 14, 660. [CrossRef]
498. Abe, S.; Tago, S.; Yokoyama, K.; Ogawa, M.; Takei, T.; Imoto, S.; Fuji, M. Explainable AI for Estimating Pathogenicity of Genetic
Variants Using Large-Scale Knowledge Graphs. Cancers 2023, 15, 1118. [CrossRef] [PubMed]
499. Kerz, E.; Zanwar, S.; Qiao, Y.; Wiechmann, D. Toward explainable AI (XAI) for mental health detection based on language
behavior. Front. Psychiatry 2023, 14, 1219479. [CrossRef]
500. Kim, T.; Jeon, M.; Lee, C.; Kim, J.; Ko, G.; Kim, J.Y.; Youn, C.H. Federated Onboard-Ground Station Computing with Weakly
Supervised Cascading Pyramid Attention Network for Satellite Image Analysis. IEEE Access 2022, 10, 117315–117333. [CrossRef]
501. Thrun, M.C.; Ultsch, A.; Breuer, L. Explainable AI Framework for Multivariate Hydrochemical Time Series. Mach. Learn. Knowl.
Extr. 2021, 3, 170–204. [CrossRef]
502. Beni, T.; Nava, L.; Gigli, G.; Frodella, W.; Catani, F.; Casagli, N.; Gallego, J.I.; Margottini, C.; Spizzichino, D. Classification of
rock slope cavernous weathering on UAV photogrammetric point clouds: The example of Hegra (UNESCO World Heritage Site,
Kingdom of Saudi Arabia). Eng. Geol. 2023, 325, 107286. [CrossRef]
503. Zhou, R.; Zhang, Y. Predicting and explaining karst spring dissolved oxygen using interpretable deep learning approach. Hydrol.
Process. 2023, 37, e14948. [CrossRef]
504. Barros, J.; Cunha, F.; Martins, C.; Pedrosa, P.; Cortez, P. Predicting Weighing Deviations in the Dispatch Workflow Process: A
Case Study in a Cement Industry. IEEE Access 2023, 11, 8119–8135. [CrossRef]
505. Kayadibi, I.; Guraksin, G.E. An Explainable Fully Dense Fusion Neural Network with Deep Support Vector Machine for Retinal
Disease Determination. Int. J. Comput. Intell. Syst. 2023, 16, 28. [CrossRef]
506. Qamar, T.; Bawany, N.Z. Understanding the black-box: Towards interpretable and reliable deep learning models. PeerJ Comput.
Sci. 2023, 9, e1629. [CrossRef] [PubMed]
507. Crespi, M.; Ferigo, A.; Custode, L.L.; Iacca, G. A population-based approach for multi-agent interpretable reinforcement learning.
Appl. Soft Comput. 2023, 147, 110758. [CrossRef]
508. Sabrina, F.; Sohail, S.; Farid, F.; Jahan, S.; Ahamed, F.; Gordon, S. An Interpretable Artificial Intelligence Based Smart Agriculture
System. CMC-Comput. Mater. Contin. 2022, 72, 3777–3797. [CrossRef]
509. Wu, J.; Wang, Z.; Dong, J.; Cui, X.; Tao, S.; Chen, X. Robust Runoff Prediction with Explainable Artificial Intelligence and
Meteorological Variables from Deep Learning Ensemble Model. Water Resour. Res. 2023, 59, e2023WR035676. [CrossRef]
510. Nakamura, K.; Uchino, E.; Sato, N.; Araki, A.; Terayama, K.; Kojima, R.; Murashita, K.; Itoh, K.; Mikami, T.; Tamada, Y.; et al.
Individual health-disease phase diagrams for disease prevention based on machine learning. J. Biomed. Inform. 2023, 144, 104448.
[CrossRef]
511. Oh, S.; Park, Y.; Cho, K.J.; Kim, S.J. Explainable Machine Learning Model for Glaucoma Diagnosis and Its Interpretation.
Diagnostics 2021, 11, 510. [CrossRef]
512. Borujeni, S.M.; Arras, L.; Srinivasan, V.; Samek, W. Explainable sequence-to-sequence GRU neural network for pollution
forecasting. Sci. Rep. 2023, 13, 9940. [CrossRef]
513. Alharbi, A.; Petrunin, I.; Panagiotakopoulos, D. Assuring Safe and Efficient Operation of UAV Using Explainable Machine
Learning. Drones 2023, 7, 327. [CrossRef]
514. Sheu, R.K.; Pardeshi, M.S.; Pai, K.C.; Chen, L.C.; Wu, C.L.; Chen, W.C. Interpretable Classification of Pneumonia Infection Using
eXplainable AI (XAI-ICP). IEEE Access 2023, 11, 28896–28919. [CrossRef]
515. Aslam, N.; Khan, I.U.; Aljishi, R.F.; Alnamer, Z.M.; Alzawad, Z.M.; Almomen, F.A.; Alramadan, F.A. Explainable Computational
Intelligence Model for Antepartum Fetal Monitoring to Predict the Risk of IUGR. Electronics 2022, 11, 593. [CrossRef]
516. Peng, P.; Zhang, Y.; Wang, H.; Zhang, H. Towards robust and understandable fault detection and diagnosis using denoising
sparse autoencoder and smooth integrated gradients. ISA Trans. 2022, 125, 371–383. [CrossRef] [PubMed]
517. Na Pattalung, T.; Ingviya, T.; Chaichulee, S. Feature Explanations in Recurrent Neural Networks for Predicting Risk of Mortality
in Intensive Care Patients. J. Pers. Med. 2021, 11, 934. [CrossRef] [PubMed]
518. Oliveira, F.R.D.S.; Neto, F.B.D.L. Method to Produce More Reasonable Candidate Solutions with Explanations in Intelligent
Decision Support Systems. IEEE Access 2023, 11, 20861–20876. [CrossRef]
519. Burgueno, A.M.; Aldana-Martin, J.F.; Vazquez-Pendon, M.; Barba-Gonzalez, C.; Jimenez Gomez, Y.; Garcia Millan, V.; Navas-
Delgado, I. Scalable approach for high-resolution land cover: A case study in the Mediterranean Basin. J. Big Data 2023, 10, 91.
[CrossRef]
520. Horst, F.; Slijepcevic, D.; Simak, M.; Horsak, B.; Schoellhorn, W.I.; Zeppelzauer, M. Modeling biological individuality using
machine learning: A study on human gait. Comput. Struct. Biotechnol. J. 2023, 21, 3414–3423. [CrossRef]
521. Napoles, G.; Hoitsma, F.; Knoben, A.; Jastrzebska, A.; Espinosa, M.L. Prolog-based agnostic explanation module for structured
pattern classification. Inf. Sci. 2023, 622, 1196–1227. [CrossRef]
522. Ni, L.; Wang, D.; Singh, V.P.; Wu, J.; Chen, X.; Tao, Y.; Zhu, X.; Jiang, J.; Zeng, X. Monthly precipitation prediction at regional scale
using deep convolutional neural networks. Hydrol. Process. 2023, 37, e14954. [CrossRef]
523. Amiri-Zarandi, M.; Karimipour, H.; Dara, R.A. A federated and explainable approach for insider threat detection in IoT. Internet
Things 2023, 24, 100965. [CrossRef]
524. Niu, Y.; Gu, L.; Zhao, Y.; Lu, F. Explainable Diabetic Retinopathy Detection and Retinal Image Generation. IEEE J. Biomed. Health
Inform. 2022, 26, 44–55. [CrossRef]
525. Kliangkhlao, M.; Limsiroratana, S.; Sahoh, B. The Design and Development of a Causal Bayesian Networks Model for the
Explanation of Agricultural Supply Chains. IEEE Access 2022, 10, 86813–86823. [CrossRef]
526. Dissanayake, T.; Fernando, T.; Denman, S.; Sridharan, S.; Ghaemmaghami, H.; Fookes, C. A Robust Interpretable Deep Learning
Classifier for Heart Anomaly Detection without Segmentation. IEEE J. Biomed. Health Inform. 2021, 25, 2162–2171. [CrossRef]
[PubMed]
527. Dastile, X.; Celik, T. Making Deep Learning-Based Predictions for Credit Scoring Explainable. IEEE Access 2021, 9, 50426–50440.
[CrossRef]
528. Khan, M.A.; Azhar, M.; Ibrar, K.; Alqahtani, A.; Alsubai, S.; Binbusayyis, A.; Kim, Y.J.; Chang, B. COVID-19 Classification from
Chest X-Ray Images: A Framework of Deep Explainable Artificial Intelligence. Comput. Intell. Neurosci. 2022, 2022, 4254631.
[CrossRef]
529. Moon, S.; Lee, H. JDSNMF: Joint Deep Semi-Non-Negative Matrix Factorization for Learning Integrative Representation of
Molecular Signals in Alzheimer’s Disease. J. Pers. Med. 2021, 11, 686. [CrossRef]
530. Kiefer, S.; Hoffmann, M.; Schmid, U. Semantic Interactive Learning for Text Classification: A Constructive Approach for
Contextual Interactions. Mach. Learn. Knowl. Extr. 2022, 4, 994–1010. [CrossRef]
531. Franco, D.; Oneto, L.; Navarin, N.; Anguita, D. Toward Learning Trustworthily from Data Combining Privacy, Fairness, and
Explainability: An Application to Face Recognition. Entropy 2021, 23, 1047. [CrossRef]
532. Montiel-Vazquez, E.C.; Uresti, J.A.R.; Loyola-Gonzalez, O. An Explainable Artificial Intelligence Approach for Detecting Empathy
in Textual Communication. Appl. Sci. 2022, 12, 9407. [CrossRef]
533. Mollas, I.; Bassiliades, N.; Tsoumakas, G. Truthful meta-explanations for local interpretability of machine learning models. Appl.
Intell. 2023, 53, 26927–26948. [CrossRef]
534. Juang, C.F.; Chang, C.W.; Hung, T.H. Hand Palm Tracking in Monocular Images by Fuzzy Rule-Based Fusion of Explainable
Fuzzy Features with Robot Imitation Application. IEEE Trans. Fuzzy Syst. 2021, 29, 3594–3606. [CrossRef]
535. Cicek, I.B.; Colak, C.; Yologlu, S.; Kucukakcali, Z.; Ozhan, O.; Taslidere, E.; Danis, N.; Koc, A.; Parlakpinar, H.; Akbulut, S.
Nephrotoxicity Development of a Clinical Decision Support System Based on Tree-Based Machine Learning Methods to Detect
Diagnostic Biomarkers from Genomic Data in Methotrexate-Induced Rats. Appl. Sci. 2023, 13, 8870. [CrossRef]
536. Jung, D.H.; Kim, H.Y.; Won, J.H.; Park, S.H. Development of a classification model for Cynanchum wilfordii and Cynanchum
auriculatum using convolutional neural network and local interpretable model-agnostic explanation technology. Front. Plant Sci.
2023, 14, 1169709. [CrossRef] [PubMed]
537. Rawal, A.; Kidchob, C.; Ou, J.; Yogurtcu, O.N.; Yang, H.; Sauna, Z.E. A machine learning approach for identifying variables
associated with risk of developing neutralizing antidrug antibodies to factor VIII. Heliyon 2023, 9, e16331. [CrossRef]
538. Yeung, C.; Ho, D.; Pham, B.; Fountaine, K.T.; Zhang, Z.; Levy, K.; Raman, A.P. Enhancing Adjoint Optimization-Based Photonic
Inverse Design with Explainable Machine Learning. ACS Photonics 2022, 9, 1577–1585. [CrossRef]
539. Naeem, H.; Alshammari, B.M.; Ullah, F. Explainable Artificial Intelligence-Based IoT Device Malware Detection Mechanism
Using Image Visualization and Fine-Tuned CNN-Based Transfer Learning Model. Comput. Intell. Neurosci. 2022, 2022, 7671967.
[CrossRef]
540. Mey, O.; Neufeld, D. Explainable AI Algorithms for Vibration Data-Based Fault Detection: Use Case-Adapted Methods and
Critical Evaluation. Sensors 2022, 22, 9037. [CrossRef]
541. Martinez, G.S.; Perez-Rueda, E.; Kumar, A.; Sarkar, S.; Silva, S.d.A.e. Explainable artificial intelligence as a reliable annotator of
archaeal promoter regions. Sci. Rep. 2023, 13, 1763. [CrossRef]
542. Nkengue, M.J.; Zeng, X.; Koehl, L.; Tao, X. X-RCRNet: An explainable deep-learning network for COVID-19 detection using ECG
beat signals. Biomed. Signal Process. Control. 2024, 87, 105424. [CrossRef]
543. Behrens, G.; Beucler, T.; Gentine, P.; Iglesias-Suarez, F.; Pritchard, M.; Eyring, V. Non-Linear Dimensionality Reduction with
a Variational Encoder Decoder to Understand Convective Processes in Climate Models. J. Adv. Model. Earth Syst. 2022,
14, e2022MS003130. [CrossRef]
544. Fatahi, R.; Nasiri, H.; Dadfar, E.; Chelgani, S.C. Modeling of energy consumption factors for an industrial cement vertical roller
mill by SHAP-XGBoost: A “conscious lab” approach. Sci. Rep. 2022, 12, 7543. [CrossRef] [PubMed]
545. De Groote, W.; Kikken, E.; Hostens, E.; Van Hoecke, S.; Crevecoeur, G. Neural Network Augmented Physics Models for Systems
with Partially Unknown Dynamics: Application to Slider-Crank Mechanism. IEEE/ASME Trans. Mechatronics 2022, 27, 103–114.
[CrossRef]
546. Takalo-Mattila, J.; Heiskanen, M.; Kyllonen, V.; Maatta, L.; Bogdanoff, A. Explainable Steel Quality Prediction System Based on
Gradient Boosting Decision Trees. IEEE Access 2022, 10, 68099–68110. [CrossRef]
547. Jang, J.; Jeong, W.; Kim, S.; Lee, B.; Lee, M.; Moon, J. RAID: Robust and Interpretable Daily Peak Load Forecasting via Multiple
Deep Neural Networks and Shapley Values. Sustainability 2023, 15, 6951. [CrossRef]
548. Aishwarya, N.; Veena, M.B.; Ullas, Y.L.; Rajasekaran, R.T. “SWASTHA-SHWASA”: Utility of Deep Learning for Diagnosis of
Common Lung Pathologies from Chest X-rays. Int. J. Early Child. Spec. Educ. 2022, 14, 1895–1905. [CrossRef]
549. Kaczmarek-Majer, K.; Casalino, G.; Castellano, G.; Dominiak, M.; Hryniewicz, O.; Kaminska, O.; Vessio, G.; Diaz-Rodriguez, N.
PLENARY: Explaining black-box models in natural language through fuzzy linguistic summaries. Inf. Sci. 2022, 614, 374–399.
[CrossRef]
550. Bae, H. Evaluation of Malware Classification Models for Heterogeneous Data. Sensors 2024, 24, 288. [CrossRef]
551. Gerussi, A.; Verda, D.; Cappadona, C.; Cristoferi, L.; Bernasconi, D.P.; Bottaro, S.; Carbone, M.; Muselli, M.; Invernizzi, P.; Asselta,
R.; et al. LLM-PBC: Logic Learning Machine-Based Explainable Rules Accurately Stratify the Genetic Risk of Primary Biliary
Cholangitis. J. Pers. Med. 2022, 12, 1587. [CrossRef]
552. Li, B.M.; Castorina, V.L.; Hernandez, M.D.C.V.; Clancy, U.; Wiseman, S.J.; Sakka, E.; Storkey, A.J.; Garcia, D.J.; Cheng, Y.; Doubal,
F.; et al. Deep attention super-resolution of brain magnetic resonance images acquired under clinical protocols. Front. Comput.
Neurosci. 2022, 16, 887633. [CrossRef] [PubMed]

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
