0% found this document useful (0 votes)
158 views36 pages

Neuro-Symbolic Artificial Intelligence: A Survey: Review

Please make ppt of this paper cover all the important points.

Uploaded by

Usman Rajpoot
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
158 views36 pages

Neuro-Symbolic Artificial Intelligence: A Survey: Review

Please make ppt of this paper cover all the important points.

Uploaded by

Usman Rajpoot
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 36

Neural Computing and Applications (2024) 36:12809–12844

https://doi.org/10.1007/s00521-024-09960-z (0123456789().,-volV)(0123456789().,-volV)

REVIEW

Neuro-symbolic artificial intelligence: a survey


Bikram Pratim Bhuyan1,2 • Amar Ramdane-Cherif 1 •
Ravi Tomar 3 •
T. P. Singh4

Received: 8 May 2023 / Accepted: 3 May 2024 / Published online: 6 June 2024
Ó The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2024

Abstract
The goal of the growing discipline of neuro-symbolic artificial intelligence (AI) is to develop AI systems with more
human-like reasoning capabilities by combining symbolic reasoning with connectionist learning. We survey the literature
on neuro-symbolic AI during the last two decades, including books, monographs, review papers, contribution pieces,
opinion articles, foundational workshops/talks, and related PhD theses. Four main features of neuro-symbolic AI are
discussed, including representation, learning, reasoning, and decision-making. Finally, we discuss the many applications of
neuro-symbolic AI, including question answering, robotics, computer vision, healthcare, and more. Scalability, explain-
ability, and ethical considerations are also covered, as well as other difficulties and limits of neuro-symbolic AI. This study
summarizes the current state of the art in neuro-symbolic artificial intelligence.

Keywords Neuro-symbolic artificial intelligence  Machine learning  Knowledge representation and reasoning 
Spatial-temporal data  Neural networks  Artificial intelligence

1 Introduction improved sectors like image recognition and language


translation [5]. Generative adversarial networks (GANs)
There have been several breakthroughs and innovations in and variational autoencoders (VAEs) may produce new
the areas of artificial intelligence (AI) and deep learning data, images, and sounds [6]. Music production and design
(connectionist artificial intelligence) during the last decade might leverage these models. Edge computing, another
[1]. The widespread use of AI and deep learning as cutting- decade-old breakthrough, allows AI model installation on
edge technologies has been a significant recent develop- low-resource devices. Thus, AI and deep learning models
ment. Several industries, including healthcare, banking, may be applied on edge, closer to the data source, which is
transportation, agriculture, and arts, have profited from beneficial in constructing Internet of Things (IoT) devices
recent artificial intelligence and deep learning develop- [7].
ments [2–4]. Yet, connectionist AI is not without its caveats. One
New technologies have advanced deep learning models drawback is that training models properly usually require a
in computer vision and natural language processing. Con- lot of data (typically involving highly unstructured, per-
volutional neural networks (CNNs) and transformers have ceptual data). These AI models may also lack the trans-
parency and explainability of other forms of AI due to the
complexity involved in understanding how they arrive at
& Bikram Pratim Bhuyan their predictions or choices [8].
bikram-pratim.bhuyan@universite-paris-saclay.fr Symbolic AI, commonly known as ‘‘good old-fashioned
1
LISV Laboratory, University of Paris Saclay, 10-12 Avenue AI’’, emerged as the foundation of AI research during the
of Europe, 78140 Velizy, France mid-twentieth century with notable figures such as Allen
2
School of Computer Science, University of Petroleum and Newell and Herbert A. Simon [9–11]. Referred to as rule-
Energy Studies, Bidholi, Dehradun, Uttarakhand 248006, based or expert systems, they were designed and imple-
India mented with a predefined set of explicit rules and logical
3
Persistent Systems, Pune, Mumbai 411016, India reasoning mechanisms to address and resolve various
4
School of Computer Science Engineering and Technology, problems. Ontologies were conceived as a means of rep-
Bennett University, Greater Noida 201306, India resenting and sharing knowledge [12]. Although symbolic

123
12810 Neural Computing and Applications (2024) 36:12809–12844

AI demonstrated proficiency in problem domains charac- predefined categories or rules. While symbolic AI excels in
terized by explicit rules and clear boundaries, they tasks that require clear, logical reasoning and inter-
encountered difficulties when confronted with incomplete pretability, its rule-based nature can limit its efficiency in
information [13]. Thus, the efficiency of these systems is scenarios where learning from data or scaling to large
hugely dependent on the completeness of the knowledge. problem spaces is essential.
The drawbacks of both the fields individually in terms of However, it’s crucial to contextualize these efficiency
‘Explainability’, ‘Efficiency’, and ‘Generalization’ could considerations within the specific domains and tasks to
be seen through Fig. 1. The efficiency of connectionist AI which each AI approach is applied. While connectionist AI
is typically considered high due to its ability to process vast may show higher efficiency in data-driven, pattern recog-
amounts of data and learn complex patterns through neural nition tasks, symbolic AI can be more efficient in domains
networks. This efficiency stems from the processing where clear reasoning, interpretability, and adherence to
capabilities of neural networks, which can handle and learn explicit knowledge or rules are paramount. This distinction
from high-dimensional data, making them particularly underscores the complementary nature of these approaches,
adept at tasks like image and speech recognition, where highlighting the potential of neuro-symbolic AI to leverage
they can directly learn from raw inputs to outputs. the strengths of both to achieve higher overall efficiency
On the other hand, the efficiency of symbolic AI is often across a broader range of tasks.
viewed as lower, particularly in the context of processing The roots of neuro-symbolic (NeSy) AI may be traced
large datasets or handling perceptual tasks. Symbolic AI all the way back to the 1950s and 1960s when the field of
operates on explicit rules and logic, which can be com- AI was getting its start [14]. In the past, artificial intelli-
putationally intensive and less flexible when dealing with gence studies focused on creating rule- and symbol-based
nuanced or ambiguous data that does not fit neatly into problem-solving machines. In the 1980s, however,

Fig. 1 The drawbacks of both the fields individually in terms of ‘Explainability’, ‘Efficiency’, and ‘Generalization’, when the fields merge
together to form neuro-symbolic artificial intelligence, all three characteristics are high

123
Neural Computing and Applications (2024) 36:12809–12844 12811

scientists started to see the method’s flaws. For example, statements. Knowledge may also be represented in a
natural language processing and vision were shown to be human-understandable form, for as via the use of words
areas where symbolic AI systems faltered. Researchers and symbols to stand in for real-world entities and abstract
began implementing neuroscientific principles into AI concepts.
systems to address these shortcomings. In the early twenty- Recent research on neural-symbolic integration, which
first century, scientists started looking at ways to combine seeks to combine the capabilities of symbolic AI with
the best features of the two methods. They came up with a neural networks to produce more powerful and adapt-
new branch of AI, neuro-symbolic AI, which combines able intelligent systems, is surveyed in the articles as
symbolic reasoning and representation with neural net- shown in Table 1, and we base our classification method
works. It has been used in disparate fields such as health- based on this with the objective of harnessing the com-
care, robotics, and natural language processing. One of the plementing capabilities of the two paradigms [168]. The
most exciting directions in artificial intelligence research criteria for classification are taken from the Kautz’s talk
today is neuro-symbolic AI, which aims to create intelli- [169], which is even regarded as the turning point of the
gent systems that can learn and reason like humans. The field [33].
growing interest in the field could be seen through the All of the major developments over the last two decades
amount of literature published, as shown in Fig. 2. The are summarized in this survey article. It delves into the
literature contains books, monographs, thesis [15–23], numerous aspects that have led to the hybridization of
review papers [20, 24–33], contributory articles [34–95], connectionist AI and symbolic AI. Its applications in many
commentary articles [25, 39, 93, 96–143], and foundational fields are also examined. The challenges are also being
workshops/talks [144–167]. It’s worth noting that neuro- considered. Figure 3 depicts a conceptual map of the arti-
synthetic AI is a hot topic in both academia and industry cle. The organization of the survey is shown in Fig. 4.
because of its immense potential for artificial general
intelligence.
Neuro-symbolic AI is a kind of AI that takes cues from 2 Background and related work
the way the human brain processes information while also
relying on symbolic logic to solve issues. The study of the 2.1 Neuro-symbolic properties
brain and its functions serves as inspiration for the ‘‘neuro’’
component of neuro-symbolic AI [33]. The ‘‘neuro’’ We delve into the core components that define neuro-
component of this AI makes use of neural networks to learn symbolic AI, encompassing representation, learning, rea-
from data and enhance its grasp of the environment, much soning, decision-making, knowledge, and logic. This
like the way human brains process information and learn exploration provides insight into how neuro-symbolic AI
from experience. The ‘‘symbolic’’ component of neuro- seeks to amalgamate the strengths of symbolic and neural
symbolic AI uses symbolic representations and logical approaches to overcome their limitations.
reasoning to accomplish its goals. This suggests that the AI
can think logically and grasp notions like ‘‘if-then’’

Fig. 2 Peer reviewed papers in


the field of neuro-symbolic AI
with keywords, ‘neuro-
symbolic’, ‘neural-symbolic’,
‘neuro symbolic’, ‘neural
symbolic’ and ‘neurosymbolic’

123
12812 Neural Computing and Applications (2024) 36:12809–12844

Table 1 Review papers with the discussion upon the domain, properties, type of neural architecture and neuro-symbolic types represented by NS
Properties
Paper Year Domain Representation Learning Reasoning Decision making Logic Neural type NS

Corchado et al. [24] 2002 Oceanography – – – – – U –


Hatzilygeroudis et al. [25] 2004 Expert Systems – – – – – – –
Öztürk et al. [26] 2014 CBR – – – – – – –
Besold et al. [27] 2017 General – – – – U U –
Garnelo et al. [28] 2019 General U – – – – – –
Garcez et al. [29] 2019 General – U U – U – –
De et al. [30] 2020 General – – – – U U –
Sarker et al. [31] 2021 General – U U - U – U
Hitzler et al. [20] 2022 General – – – – – – –
Wang et al. [32] 2022 General – U U U U – –
Garcez et al. [33] 2023 General U – U – - – U
Our survey 2024 General U U U U U U U

2.1.1 Representations relationships and structures within data, making them ideal
for tasks that involve relational reasoning, knowledge
When discussing symbolic AI, ‘‘localist representations’’ graphs, and structured prediction. [173] surveys around this
refer to using isolated symbols to stand in for abstract ideas integration for encoding both entity attributes and the
or concrete objects [170]. Expert systems and rule-based relationships between entities in a way that is amenable to
systems are two examples of symbolic AI that extensively neural network processing.
use localist representations [171]. As each sign represents a Differentiable programming extends the capabilities of
distinct idea that humans can readily grasp, they benefit neural networks by making them more flexible and capable
from being interpretable and transparent. of incorporating symbolic computation within the learning
In contrast to localist representations, distributed repre- process. [174, 175] uses this approach to enable the inte-
sentations [170] have gained traction in recent years, par- gration of symbolic reasoning directly into the neural net-
ticularly in the context of deep learning. Distinct work’s architecture, allowing for the optimization of
dimensions of a vector of real-valued integers in distributed symbolic operations alongside standard neural network
representations represent different features or aspects of a parameters, facilitating a tighter integration of symbolic
topic. This paves the way for more versatile and potent and sub-symbolic AI components.
representations that encapsulate subtle but significant data Variable grounding refers to the process of linking
linkages and patterns. The difference can be seen in Fig. 5. abstract symbols or concepts to concrete instances in data.
Localist and distributed representation has their own In the context of neuro-symbolic AI, [176, 177] involves
benefits and drawbacks, as shown in Table 2. the identification and association of symbolic variables
Attention systems, graph neural networks, differentiable with relevant features or patterns learned by the neural
programming, variable grounding, symbol manipulation, network, enabling the system to reason about abstract
and foundation model representation techniques make concepts in a grounded, data-driven context.
neuro-symbolic AI integration unique in the field. Symbol manipulation in neuro-symbolic systems
Attention mechanisms in neuro-symbolic AI improve involves the use of operations on symbols that represent
the model’s focus on relevant parts of the input data or abstract concepts, akin to traditional symbolic AI.
internal representations. This is particularly used in tasks [178, 179] integrated these operations within a neural
requiring sequential data processing, like natural language framework. Neuro-symbolic AI systems can perform
understanding by [91, 172], where the model needs to symbolic reasoning, such as logical deduction and infer-
focus on relevant parts of the input sequence to make ence, while also benefiting from the adaptive learning
decisions or predictions. capabilities of neural networks.
Graph neural networks (GNNs) are pivotal in repre- Finally, leveraging foundation models for representation
senting and processing data in graph form, which is can enhance performance in neuro-symbolic tasks, reduce
inherently symbolic. GNNs can capture the complex data labeling, and minimize manual engineering, as

123
Neural Computing and Applications (2024) 36:12809–12844 12813

Commentary
Type 2 [22, Articles [90,
Type 3 Foundational
31, 56–69] 104, 154,
[10–13, Workshops
157–204] Contributory
32–36, 70] Type 1 [8, / talks
[205–228] Articles [21,
9, 14, 15, 21, 45, 65,
30, 54, 55, 76, 99–156]
Type 4 229, 230]
[23–28,
44, 71–76]
Review
Papers
Contribution
Type [85, 89–98]
Type 5 [16– Type
18, 29, 37–
43, 77, 78,
231–233] Books/
Mono-
graphs
/ Thesis
Type 6 [46, 81–88]
[19, 79, 80,
234–236]

Neuro-Symbolic AI
Robotics
[8–13]

Logic

Question
Answering
[14–21]
Domain Properties
Decision
Making

Medical
applications
[22–29] Other
Sciences
[54–80] Reasoning
Computer Programming
Vision Representation
and Opti-
[30–43] mization Learning
[21, 44–53]

Fig. 3 A conceptual map of the survey, depicting the wide range of neuro-symbolic AI implementations, their respective type of integration,
contribution kinds, and properties

demonstrated by the introduction of architectures like is systematically derived from existing rules and examples.
NeSyGPT [180]. However, this method’s reliance on extensive manual
curation of knowledge bases and datasets is a notable lim-
2.1.2 Learning itation [183].
In contrast, connectionist AI, particularly through deep
Neuro-symbolic AI introduces a paradigm shift in how learning, excels at learning representations from raw,
machines learn, blending the deductive, rule-based learning unstructured data [184]. It employs various techniques
of symbolic AI with the inductive, pattern-recognizing (e.g., supervised, unsupervised, and reinforcement learning
capabilities of neural networks. This hybrid approach [185]) to adjust neural connections, enabling pattern
leverages the strengths of both domains to facilitate a more recognition and decision-making. While powerful, this
comprehensive learning methodology. approach often lacks transparency and interoperability.
Traditional symbolic AI learns through logical deduc- Neuro-symbolic AI (NeSy) aims to transcend these
tion, inducing general rules from specific instances. Tech- limitations by integrating the structured knowledge repre-
niques like decision tree induction [181] and explanation- sentation of symbolic AI with the adaptive learning
based learning [182] exemplify this, where new knowledge mechanisms of neural networks. This integration enables

123
12814 Neural Computing and Applications (2024) 36:12809–12844

Fig. 4 Organization of the article as a flowchart

NeSy systems to: (a) learn from fewer examples by vast datasets but cannot traditionally perform explicit, rule-
leveraging pre-existing symbolic knowledge, thus based reasoning. However, there has been some recent
addressing the data-hungry nature of pure neural approa- work on developing reasoning tasks based on neural net-
ches; (b) enhance interpretability by grounding neural works. For example, some researchers have explored using
network outputs in symbolic representations, making the neural networks to understand natural language and answer
learning process and outcomes more understandable; questions [188]. Other researchers have looked into neural-
(c) facilitate adaptable reasoning that combines the symbolic integration, in which neural networks are used to
robustness of neural pattern recognition with the precision learn representations of complex data, which are fed into
of symbolic logic; and (d) incorporate feedback loops symbolic reasoning systems to make logical inferences
where symbolic reasoning can guide neural learning and [189]. Even with all these efforts, making neural network-
vice versa, enabling dynamic adaptation to new informa- based approaches to reasoning tasks work well is still
tion or tasks. The comparison is shown in Table 3. tough, especially when explicit rules or logic are needed.
These challenges include how hard it is to encode symbolic
2.1.3 Reasoning information in a distributed representation, how fragile
neural networks are when dealing with new inputs, and
Reasoning, a fundamental aspect of intelligence, has been how little they can do abstract reasoning or figure out what
approached differently across the AI spectrum. The trade- information is missing.
off between learning and reasoning in symbolic AI and Another important discussion is on combinatorial and
connectionist AI can be shown in Table 4. Symbolic AI, common-sense reasoning [33]. Common-sense reasoning is
with its roots in formal logic and knowledge representation, a type of approximate reasoning that involves making
traditionally employs deductive, inductive, and abductive assumptions or inferences based on general knowledge and
reasoning [186]. These methods allow for deriving con- experience rather than on explicit rules or algorithms.
clusions from known premises, generalizations from Problems in mathematics, computer science, and engi-
specific instances, and formulating plausible explanations neering are typically solved with the use of combinatorial
from observations [186, 187]. While powerful in structured reasoning methods, including counting principles, permu-
environments, symbolic reasoning struggles with ambigu- tations, and combinations. The emergence of neuro-sym-
ity and the inherent uncertainty of real-world data. bolic AI represents a paradigm shift, aiming to meld the
structured reasoning capabilities of symbolic AI with the
In contrast, connectionist models, particularly neural adaptive learning process of neural networks [136]. The
networks, excel in pattern recognition and inference from various types of reasoning used are shown in Fig. 6.

123
Neural Computing and Applications (2024) 36:12809–12844 12815

Fig. 5 Difference between localist and distributed representations

Table 2 Comparison of localist and distributed representations and integration in neuro-symbolic AI


Aspect Localist representation Distributed representation

Definition Represents concepts with dedicated units or nodes in the Represents concepts across many units, with each unit
network, where each unit represents a single concept or participating in the representation of multiple concepts,
category allowing for more nuanced representations
Benefits High interpretability and transparency Greater capacity for generalization
Easier manipulation of individual concepts Efficient use of network capacity
Simplifies mapping of symbolic knowledge Facilitates learning of complex patterns
Drawbacks Limited scalability with the number of concepts Reduced interpretability of individual units
Less efficient in capturing complex patterns Integration of explicit symbolic knowledge can be challenging
Neuro- Neuro-symbolic AI leverages both approaches, utilizing localist representations for symbolic components and distributed
symbolic AI methods for neural processing, enabling efficient integration of symbolic reasoning with neural learning
integration

Under NeSy, CTLK (Temporal-Epistemic Reasoning) networks can be employed to interpret and defend trans-
[39, 40] exemplifies the application of deductive reasoning lations of non-classical logics, including temporal logic.
in neuro-symbolic systems, showcasing how neural CIL2 P [36, 37] (Connectionist Inductive Learning and

123
12816 Neural Computing and Applications (2024) 36:12809–12844

Table 3 Comparison of learning paradigms in neuro-symbolic AI


Learning Characteristics Neuro-symbolic integration
paradigm

Symbolic Involves logical deduction and induction to generate rules NeSy integrates symbolic rules with neural learning, allowing
learning from data. Highly interpretable but requires extensive for the derivation of symbolic knowledge from neural
knowledge engineering representations, enhancing interpretability and leveraging
pre-existing knowledge
Connectionist Utilizes neural networks to learn patterns from large NeSy harnesses neural networks for pattern recognition and
learning datasets. Excels in generalization but lacks transparency generalization, while grounding the learned patterns in
symbolic representations for improved transparency and
reasoning
Hybrid Aims to combine the strengths of symbolic and connectionist NeSy embodies true hybrid learning by deeply integrating
learning approaches, often using separate components for each symbolic and neural processes within a unified framework,
enabling dynamic, bidirectional interaction between
symbolic reasoning and neural learning
Reinforcement Involves learning through interaction with an environment NeSy applies reinforcement learning principles to both
learning and receiving feedback in the form of rewards symbolic and neural components, enabling the system to
refine its strategies and knowledge through experience
Unsupervised Focuses on discovering hidden patterns or structures in In NeSy, unsupervised learning techniques can be used to
learning unlabeled data uncover latent symbolic structures within data, which can
then be explicitly represented and manipulated

Table 4 Trade-off between learning and reasoning in symbolic AI 2.1.4 Decisions


and neural networks
Quantification Symbolic AI Neural network Neuro-symbolic AI advances decision-making by inte-
grating the rapid, intuitive processing akin to Kahneman’s
Reasoning Learning Reasoning Learning
System 1 with the deliberate, logical reasoning of System 2
Universal (8) Easy Hard [190]. Table 5 summarizes the two types of decision-
Existential (9) Hard Easy making in ‘‘Thinking, Fast and Slow’’ and their relation-
ship to neuro-symbolic AI.
Neuro-symbolic models incorporate neural network
components that mimic System 1 thinking by processing
Logic Programming) serves as a prime example of induc- sensory data rapidly to produce intuitive responses. These
tive reasoning in neuro-symbolic AI, where a neural net- components are adept at recognizing patterns and making
work is trained using propositional logic and then used to quick predictions, similar to the fast and subconscious
derive logical programs from the learned representations. decision-making observed in humans. For instance, neural
MicroPsi [58, 59], CORGI (COmmonsense Reasoning by learning within NeSy can be trained on large datasets to
Instruction) and COMET (COMmonsense Transformers) swiftly identify patterns, akin to how humans rely on
[81, 82] stand out as a significant contribution toward heuristics and past experiences for immediate decision-
modeling common-sense reasoning within a neuro-sym- making.
bolic framework, focusing on cognitive architecture and Symbolic components within NeSy frameworks reflect
autonomous motivation, which are essential for common- System 2 thinking, employing logical rules and knowledge
sense understanding and decision-making. DeepProbLog representation for reasoned analysis and decision-making.
[75–77] integrates probabilistic logic programming with This aspect allows NeSy systems to handle complex,
neural networks, offering a powerful approach to combi- structured problems that require careful deliberation and
natorial reasoning where the system can reason over logic. Techniques such as rule-based inference and sym-
complex, structured data and learn from uncertain infor- bolic manipulation enable NeSy models to perform tasks
mation, making it relevant for tasks that require combina- that necessitate a deep understanding of relationships and
torial reasoning capabilities. concepts, mirroring humans’ slow, conscious decision-
making process.
The logical neural networks (LNNs) developed by IBM
Research [86] embody aspects of System 2 thinking by

123
Neural Computing and Applications (2024) 36:12809–12844 12817

Fig. 6 Different types of reasoning which are not mutually exclusive and can often be used in combination with one another

Table 5 A table summarizing the two types of decision-making in ‘‘Thinking, Fast and Slow’’ and their relationship to neuro-symbolic AI
Type of Description Relationship to neuro-symbolic AI
decision-
making

System 1 Fast, automatic, subconscious decision- Similar to neural learning, where the system is trained on large amounts of data
making based on heuristics and intuition to quickly recognize patterns and make predictions.
System 2 Slow, deliberate, conscious decision-making Similar to symbolic learning, where the system is provided with explicit logical
based on reasoning, analysis, and logic rules and knowledge representation to reason about concepts and
relationships.

supporting first-order logic, allowing for the representation The Neuro-Symbolic Question Answering (NSQA)
of more complex kinds of knowledge in a way that’s system [191] is another example where IBM Research has
understandable and can represent uncertainty. LNNs applied NeSy for knowledge-based question answering,
improve predictive accuracy by representing the strengths requiring advanced reasoning such as multi-hop, quantita-
of relationships between logical clauses via neural weights. tive, geographic, and temporal reasoning. The NSQA
They are tolerant of incomplete knowledge, unlike many approach translates natural language questions into an
AI approaches that make closed-world assumptions. This abstract form that captures the conceptual meaning,
feature enables LNNs to operate under more realistic, allowing reasoning over existing knowledge to answer
open-world assumptions, accommodating incomplete complex questions. This method provides interpretability,
knowledge robustly.

123
12818 Neural Computing and Applications (2024) 36:12809–12844

generalizability, and robustness, which are critical in Logic is a foundational pillar for knowledge represen-
enterprise natural language processing settings. tation in NeSy, providing a formal structure for encoding
Implementations like Scallop [192], which supports domain-specific rules and relationships. By mapping logi-
differentiable logical and relational reasoning, and Deep- cal constructs to neural representations, NeSy systems can
ProbLog [75–77], which combines neural networks with leverage the robustness of neural learning while adhering
probabilistic reasoning, further illustrate the versatility and to the precision of logical reasoning. This dual approach
depth of NeSy approaches in bridging the gap between not only enhances the system’s interpretability but also its
neural and symbolic architectures. These implementations adaptability to complex reasoning tasks [31, 195].
showcase how NeSy can leverage large-scale learning and Knowledge graphs represent a pivotal component of
symbol manipulation for robust intelligence. NeSy, offering a structured and interconnected framework
for representing complex knowledge bases. By encapsu-
2.1.5 Knowledge and logic lating entities, concepts, and their relationships in a graph
structure, knowledge graphs enable NeSy systems to per-
Neuro-symbolic AI synergizes the structured expressive- form sophisticated reasoning and inference, drawing on the
ness of logic with the adaptive learning capabilities of rich semantic connections encoded within the graph
neural networks, fostering systems that excel in reasoning [196, 197].
and knowledge representation. Figure 7 gives a pictorial
view of such a framework’s various kinds of logic. 2.2 Neuro-symbolic: best of both worlds
NeSy architectures frequently employ propositional
logic for its simplicity in representing binary relationships Neuro-symbolic AI can build more powerful reasoning and
and decision processes. First-order logic (FOL), with its learning systems by combining the strengths of deep
ability to quantify individuals, extends this capacity, learning-based methods and symbolic reasoning tech-
allowing for more intricate representations of real-world niques. However, the key research questions (included in
scenarios. Integrating FOL in NeSy facilitates reasoning Wikipedia) asked [198] were:
about entities and their relations, enhancing the system’s
A. What is the best way to integrate neural and symbolic
ability to generalize from specific instances to broader
architectures?
concepts [20, 193].
B. How should symbolic structures be represented within
Higher-order logic (HOL) further expands the expres-
neural networks and extracted from them?
sive power of NeSy systems by enabling quantification
C. How should common-sense knowledge be learned and
over predicates and functions. This allows for the modeling
reasoned about?
of complex abstractions and relationships, which is pivotal
D. How can abstract knowledge that is hard to encode
for tasks requiring deep semantic understanding. However,
logically be handled?
the increased expressiveness of HOL comes with chal-
lenges in decidability and computational efficiency, We now try to find the solutions to these questions in the
necessitating innovative solutions within NeSy frameworks major algorithms/paradigms/language/frameworks devel-
to harness its potential effectively [29, 194]. oped for neuro-symbolic artificial intelligence integration

Fig. 7 Various disciplines of logic: a. Symbolic expressions—delving quantifiers and variables that can represent objects in a domain. d.
into the language of mathematics and logic, symbolic expressions use Higher-order logic—builds on first-order logic by allowing functions
variables and operations to represent complex ideas succinctly. For and predicates to be inputs to other functions and predicates,
example, ‘a?b?2cosA’ and ‘1?5/(6*10)?15’ demonstrate how facilitating more complex expressions of ideas. e. Knowledge
mathematical symbols and functions can encapsulate calculations or graphs—representing complex networks of real-world information,
relationships. b. Propositional logic—this discipline focuses on knowledge graphs connect entities (such as individuals, places, and
forming and analyzing statements that can be either true or false. c. objects) through edges that represent their interrelations
First-order logic—extends propositional logic by incorporating

123
Neural Computing and Applications (2024) 36:12809–12844 12819

during the last two decades. The summary of these 2.3 Neuro-symbolic types
frameworks is in Table 6. From Table 6, we can now cover
some discussions based on the four questions posed. 2.3.1 Type 1: symbolic neuro-symbolic
The integration of neural and symbolic architectures has
been approached in various innovative ways. Early meth- In the domain of type 1 neuro-symbolic AI, the interplay
ods like KBANN [34] and Penalty Logic [35] laid the between neural networks and symbolic reasoning forms the
groundwork by mapping propositional logic and penalty cornerstone of representation, inference, and learning pro-
systems onto neural networks, respectively. As the field cesses. Here, neural networks are harnessed for their
evolved, more sophisticated frameworks like LTN powerful representational learning capabilities, enabling
[62, 66–68] and Tensor Networks [62] emerged, offering the extraction of nuanced patterns and features from
richer representations and interactions within neural net- complex data. This is particularly evident in natural lan-
works through tensors and differentiable logical languages. guage processing, where neural network-based vector
More recent advancements like DeepLogic [92] and HRI embeddings, such as those developed by [199, 200],
[93] have focused on simultaneous learning of perception transform input symbols into rich, continuous vector
and reasoning, and hierarchical rule induction, showcasing spaces. These embeddings capture semantic and syntactic
the continuous evolution toward more seamless and effi- relationships inherent in the data, facilitating a broad
cient integration methods. spectrum of neural network-driven tasks like classification,
The representation and extraction of symbolic structures prediction, and sequence generation.
within neural networks have seen significant advance- Conversely, symbolic reasoning within type 1 systems is
ments. Early models like NSL [38] and CTLK [39, 40] deployed to imbue these neural representations with
introduced context-free languages and the capability to structured, logical frameworks. This symbolic layer is
interpret non-classical logics, respectively. Over time, pivotal for encoding knowledge, performing deductive
models like NTP [65] and DeepProbLog [75–77] have reasoning, and ensuring the interpretability of the AI sys-
enhanced the representation of complex logical structures tem’s operations. It leverages symbols and formal logic to
and probabilistic logic programming within neural net- articulate rules and constraints, thereby guiding the deci-
works. These developments highlight a trend toward more sion-making processes in a transparent and explainable
expressive and interpretable neuro-symbolic systems cap- manner.
able of embedding and reasoning with intricate symbolic The fusion of neural networks and symbolic reasoning
information. in type 1 neuro-symbolic AI endeavors to marry the
Learning and reasoning about common-sense knowl- adaptive, data-driven insights of neural networks with the
edge have been central to neuro-symbolic AI’s evolution. clarity and rigor of symbolic logic. This hybrid approach
Initial approaches like CIL2 P [36] and SATyrus [41] not only enhances the system’s ability to process and
focused on inductive learning and constraint processing. interpret complex, real-world data but also ensures that its
Later, models like NLM [78] and NSPS [62] demonstrated operations remain grounded in logical principles that are
scalable learning from small to larger tasks and program comprehensible to human operators.
synthesis, respectively, indicating a growing capability in Figure 8 illustrates this synergistic relationship between
common-sense reasoning. The introduction of models like neural representation and symbolic logic, highlighting how
CORGI [89] and NSFR [90], which engage in conversa- each contributes to the system’s overall functionality.
tional reasoning and forward-chaining reasoning, respec- Sequential methodologies within this category, such as
tively, showcases the field’s progression toward more language translation or graph categorization, exemplify the
dynamic and interactive common-sense reasoning systems. application of neural networks for symbolic processing.
The handling of abstract knowledge has evolved from However, as outlined in Table 7, despite their advance-
simpler logic mapping and penalty systems in models like ments, these integrations highlight the ongoing challenges
KBANN [34] and Penalty Logic [35] to more complex in achieving the full potential of neuro-symbolic
hierarchical and adaptive systems seen in HRI [93] and integration.
DeepLogic [92]. These recent developments demonstrate a
significant advancement in neuro-symbolic AI’s ability to 2.3.2 Type 2: symbolic [neuro]
process, reason, and learn from abstract concepts, moving
closer to human-like reasoning capabilities. Systems of type 2 neuro-symbolic AI employ neural net-
works as subroutines inside a broader symbolic problem
solver; these systems are hybrid but are predominantly
symbolic. Loose coupling between the symbolic and neural

123
12820 Neural Computing and Applications (2024) 36:12809–12844

Table 6 Major algorithms/paradigms/language/frameworks developed for neuro-symbolic artificial intelligence integration during the last two
decades
Authors/Work Year Question A: Best way to Question B: Question C: Learning and Question D: Handling
(Ref) integrate Representation and reasoning about common- abstract knowledge
extraction of symbolic sense knowledge
structures

KBANN [34] 1994 Hybrid learning system Propositional logic Utilizes past knowledge Demonstrates superior
mapping domain theories encoded within neural for generalization, aiding generalization in
onto neural networks architectures common-sense reasoning molecular biology,
indicating effective
handling of abstract
concepts
Penalty Logic 1995 Penalty Logic as an Embeds symbolic Addresses nonmonotonic Penalty system allows for
[35] alternative connectionist structures as penalties reasoning and approximation and
paradigm for integration within neural networks inconsistent beliefs, reasoning about abstract
relevant to common- knowledge
sense knowledge
CIL2 P [36] 1999 CIL2 P model based on Utilizes a translational Inductive learning from Logic programming aspect
feed-forward ANN and technique for embedding examples and past aids in handling abstract
logic programming propositional logic knowledge supports knowledge that is
common-sense reasoning logically hard to encode
NSL [38] 2002 Integrates neural and Employs weighted-sum Facilitates common-sense Addresses abstract
symbolic systems via a nonlinear thresholded reasoning through knowledge using BNF
context-free language elements for symbolic inductive learning and formalism within a
embedded in neural representation formal language neural framework
networks structure
CTLK [39, 40] 2003 Demonstrates artificial Neural networks are The ability to reason about Addresses the challenge of
neural networks’ employed to solve new information suggests encoding and processing
capability to interpret problems like the a pathway for learning abstract knowledge
and apply non-classical muddy-children puzzle, and applying common- through the application
logics, including indicating a method for sense knowledge within of temporal-epistemic
propositional temporal embedding and neural frameworks reasoning within neural
logic, showcasing an extracting complex networks
advanced integration logical structures
method
SATyrus [41] 2005 SATyrus showcases a The architecture employs The model’s ability to Its approach to expressing
neuro-symbolic approach energy functions to solve complex problems problems as energy
for constraint processing represent symbolic like the traveling functions offers a unique
by translating problems constraints within neural salesman problem hints way to handle abstract
into energy functions, networks, facilitating at its capacity for knowledge that is
indicating a novel their extraction through common-sense reasoning typically challenging to
integration method global minima solutions and problem-solving encode logically
NSBL [42] 2005 Neuro-symbolic language Action-selection and Adaptive behavior for Modeling complex
for robotics behavior inference mechanisms common-sense reasoning behaviors and navigation
modeling for symbolic in robotics in robotics
representation
Sathasivam et al. 2010 Introduces the Pseudo Demonstrates an effective Enhances the network’s Compares with Hebb Rule
[44–46] inverse learning rule for method for representing capability for inductive and Direct learning rule,
enhancing Hopfield logical functions within learning, relevant for showcasing efficiency in
neural network logic neural networks common-sense reasoning handling complex logical
programming constructs
Velik et al. [47] 2010 Introduces a neuro- Proposes neuro-symbolic Explores perceptual Addresses the binding
symbolic network coding to represent and learning processes, problem in perception,
bridging neurological process multimodal suggesting a framework providing insights into
and symbolic levels, sensory information, for common-sense handling abstract
offering a unified facilitating the extraction knowledge acquisition knowledge through
approach to integration of symbolic structures and reasoning based on neuro-symbolic
from neural data sensory inputs interactions

123
Neural Computing and Applications (2024) 36:12809–12844 12821

Table 6 (continued)
Authors/Work Year Question A: Best way to Question B: Question C: Learning and Question D: Handling
(Ref) integrate Representation and reasoning about common- abstract knowledge
extraction of symbolic sense knowledge
structures

Komendantskaya 2010 Introduced neural Utilized symbol Explored recursive Demonstrated the neural
et al. [48] networks capable of recognizers and recurrent computing for enhancing network’s ability to
performing induction, connections for common-sense reasoning handle complex
presenting a novel embedding and in neural networks dependencies,
approach to neuro- processing symbolic contributing to the
symbolic computation structures management of abstract
knowledge
Neurule [49] 2011 Employs neurules derived Neurules enable efficient Enhances reasoning with Facilitates adaptive
from training examples updates and interactive case-based integration, reasoning with diverse
or symbolic rule bases, inference, illustrating indicating an approach knowledge sources,
showcasing a method for advanced symbolic for incorporating addressing the challenge
dynamic integration structure handling within common-sense of managing abstract
neural frameworks knowledge knowledge
SCTL [55] 2011 Utilizes sequences and Employs a nonlinear The learning from The adaptation of temporal
counter-examples to recurrent network model sequences and system logic rules and model
integrate temporal logic to represent and extract properties facilitates checking into the neural
rules into neural temporal logic structures, reasoning about network aids in
networks, offering a enhancing symbolic common-sense managing abstract
novel approach to neuro- representation within knowledge, particularly knowledge related to
symbolic integration neural frameworks in temporal domains time and system
behaviors
NTN [56] 2013 Introduces a method for Employs tensors for rich Utilizes knowledge base Demonstrates high
entity vectors to interact representation and reasoning for predicting accuracy in classifying
through tensors, interaction of entity new entity relationships, unseen relationships,
enhancing the integration vectors, enabling the indicating a capability showcasing the model’s
of knowledge bases with extraction of complex for common-sense ability to manage
neural networks relational information knowledge inference abstract knowledge
Riveret et al. [57] 2015 Integrates probabilistic Enables alternative The probabilistic setup Demonstrates the handling
abstract argumentation labeling within neural suggests a method for of complex argument
with Boltzmann networks, facilitating the common-sense reasoning structures, contributing
machines, offering a representation and through argumentation to the abstraction of
unique approach to extraction of knowledge within neural
neuro-symbolic argumentative structures networks
reasoning
MicroPsi [58] 2015 Explores neuro-symbolic Models complex human- Utilizes polycyclic Applies parameters and
cognitive architecture like behaviors and motivation and social modulators to capture
with a focus on emotions, providing a demands to simulate individual variance and
autonomous motivation, framework for common-sense reasoning personality traits,
bridging cognitive representing and and social interactions offering insights into
processes with symbolic extracting symbolic abstract knowledge
reasoning structures related to representation
affective states
Confidence Rules 2016 Introduces a novel method Enhances the Demonstrates the
[63] for embedding representation of deep incorporation of
quantitative ideas in networks through historical data into
neural networks using confidence-based training, suggesting a
confidence criteria layerwise extraction potential for abstract
knowledge handling
Hu et al. [64] 2016 Provides a framework for Utilizes iterative The technique’s ability to
enhancing neural distillation to embed infuse structured logical
networks with first-order logic rules into network information into neural
logic, offering a novel weights, improving networks suggests a
integration approach symbolic structure potential for handling
representation abstract knowledge

123
12822 Neural Computing and Applications (2024) 36:12809–12844

Table 6 (continued)
Authors/Work Year Question A: Best way to Question B: Question C: Learning and Question D: Handling
(Ref) integrate Representation and reasoning about common- abstract knowledge
extraction of symbolic sense knowledge
structures

NTP [65] 2016 Utilizes differentiable Enables the representation The application of domain Facilitates the handling of
backward chaining to and learning of complex knowledge and canonical abstract knowledge by
integrate logical logical structures through rules suggests a method learning logical linkages
reasoning within neural replacement for common-sense from minimal data
networks representations reasoning
LTN [66] 2016 Presents LTN as a Utilizes Real Logic, a The framework’s ability to LTN’s integration of first-
framework combining differentiable logical handle rich data and order logic and neural
neural networks with language, for abstract world computation offers a
first-order logic for representing and knowledge suggests novel approach to
querying, learning, and processing data and potential for common- managing abstract
reasoning knowledge within neural sense reasoning knowledge in AI tasks
networks applications
Tensor networks 2016 Introduces a Neuro- Features two novel neural Demonstrates program Leverages context-free
[62] Symbolic Program modules: a cross- synthesis capability, grammar rules for
Synthesis method, correlation I/O network potentially applicable in constructing parse trees,
enabling autonomous and R3NN for program learning common-sense highlighting a novel
code generation for synthesis reasoning patterns approach to abstract
replicating input–output knowledge
pairs representation
Wang et al. [69] 2017 Introduces DGCC, Employs a multi- Proposes ‘‘hierarchical
blending human granularity approach to structuralism’’ as a new
cognition methods with represent and process paradigm, potentially
machine learning for information, enhancing advancing the handling
cognitive computing symbolic representation of abstract and complex
in neural networks knowledge
Tran et al. [70] 2017 Proposes a method to Enhances RBMs to handle Offers a less complex
represent propositional symbolic structures framework for
formulas in Restricted through a new integrating symbolic
Boltzmann Machines representation approach knowledge, suggesting
(RBMs), simplifying potential in handling
logical implications and abstract knowledge
Horn clauses
representation
TPRN [72] 2018 Introduces TPRN for Embeds discrete symbol Demonstrates learning of Enables deep learning
interpretable question structures within neural syntax/semantics through systems to create
answering using networks to represent task performance, representations encoding
grammatical concepts and process linguistic aligning with natural abstract grammatical
without prior linguistic information language acquisition concepts, bridging the
knowledge theories gap between continuous
numerical operations and
discrete conceptual
categories
dILP [73] 2018 Introduces dILP Embeds logical structures Facilitates learning from Supports data efficiency
framework for robust within neural networks to ambiguous data, and generalization,
logic programming enhance interpretability suggesting an approach addressing the challenge
against noisy data, and reasoning for common-sense of encoding abstract
extending beyond capabilities knowledge acquisition knowledge that is hard to
traditional ILP encode logically
capabilities
DeepProbLog 2018 Proposes DeepProbLog, Combines symbolic and Aids in learning and Showcases the integration
[75] integrating neural sub-symbolic reasoning with of logical reasoning and
networks with representations, enabling probabilistic models, probabilistic modeling,
probabilistic logic complex logical contributing to the offering new
programming for reasoning within neural understanding of perspectives on handling
enhanced reasoning architectures common-sense abstract knowledge
knowledge

123
Neural Computing and Applications (2024) 36:12809–12844 12823

Table 6 (continued)
Authors/Work Year Question A: Best way to Question B: Question C: Learning and Question D: Handling
(Ref) integrate Representation and reasoning about common- abstract knowledge
extraction of symbolic sense knowledge
structures

NLM [78] 2019 Introduces NLM for Processes objects, Demonstrates scalability Illustrates how neural
inductive reasoning and attributes, and relations from small-scale tasks to networks can
learning, employing using logic programming larger applications, approximate complex
logic programming within neural indicating potential for functions, enhancing the
alongside neural frameworks common-sense handling of abstract
networks knowledge learning knowledge
SGM [79] 2019 Combines deep generative Enhances generative Offers a new perspective
models with neuro- models by incorporating on integrating
symbolic programs, global structural programmatic
introducing a expressions frameworks with neural
programmatic framework models, potentially
for structure expression advancing abstract
knowledge
representation
KENN [80] 2019 Develops KENN, adding Integrates logical Facilitates the
logical constraints to restrictions within neural incorporation of
neural network networks to refine learnable logical
predictions through a predictions constraints, contributing
Knowledge Enhancer to the discussion on
layer abstract knowledge
encoding
COMET [81] 2019 Adapts language models to Enhances language models Demonstrates the Addresses the integration
generate new common- with common-sense generation of accurate of dynamic, contextually
sense knowledge, reasoning capabilities common-sense relevant common-sense
validated against knowledge knowledge into language
ATOMIC and models
ConceptNet databases
PLANS [83] 2020 Applies hybrid systems to Integrates neural and rule- Reduces human oversight Innovates in combining
decode decision-making based reasoning for in understanding neural and symbolic
logic from visual decision-making logic decision-making components efficiently
narratives, introducing analysis processes in complex for decision-making
adaptive filtering for scenarios analysis
neurally inferred
specifications
r-FOL [84] 2020 Evaluates VQA models’ Incorporates first-order Facilitates the separation
reasoning using a logic for interpretability of reasoning from
differentiable first-order in reasoning processes perception in VQA
logic framework, models, enhancing
independent of interpretability and
perception analytical capabilities
MWS [85] 2020 Explores neuro-symbolic Introduces the MWS Utilizes MWS to learn Focuses on explainability
generative models using algorithm to enhance models in complex and compositional
neural networks for both program induction within domains, suggesting an structure in generative
inference and symbolic learning processes approach for acquiring modeling, contributing to
data generation, common-sense abstract knowledge
capturing compositional knowledge representation
structures
LNN [86] 2020 Presents LNNs that Enables neural networks to Could facilitate logical Advances the field by
evaluate logical process logical reasoning and common- embedding weighted
equations, integrating predicates and equations, sense knowledge logical systems within
predicate logic within enhancing symbolic application through neural networks,
neural frameworks representation neural computation addressing abstract
reasoning challenges

123
12824 Neural Computing and Applications (2024) 36:12809–12844

Table 6 (continued)
Authors/Work Year Question A: Best way to Question B: Question C: Learning and Question D: Handling
(Ref) integrate Representation and reasoning about common- abstract knowledge
extraction of symbolic sense knowledge
structures

DLM [88] 2021 Proposes DLM for tackling Utilizes predicates as Demonstrates the Introduces a novel method
ILP and RL problems weights, enabling a application in solving for encoding and
using a neural-logic continuous complex problems, processing abstract
architecture representation of first- implying potential for logical knowledge
order logic programs common-sense reasoning through gradient descent,
within neural networks enhancing the neuro-
symbolic AI domain
CORGI [89] 2021 Introduces a Engages in dialogue using Demonstrates the Highlights the practical
conversational approach a common-sense evocation of common- application of neuro-
for common-sense knowledge base, sense knowledge through symbolic models in
reasoning using a neuro- enhancing user human speech, conversational AI,
symbolic theorem prover interaction with AI suggesting advancements contributing to the field
in natural language of common-sense
understanding reasoning
NSFR [90] 2021 Proposes a novel reasoning Transforms raw inputs into Facilitates seamless Enhances the
method using probabilistic ground deduction of new facts interpretability and
differentiable forward- atoms for reasoning, from existing knowledge, flexibility of neuro-
chaining based on first- advancing symbolic aligning with common- symbolic reasoning,
order logic representation in neural sense reasoning pushing the boundaries
networks paradigms of abstract knowledge
handling
autoBOT [91] 2021 Explores autonomous Evolves representations Contributes to the
development of text rather than learning advancement of low-
representations for them, offering a novel resource, explainable AI
explainable and efficient approach to handling models, potentially
AI models symbolic structures impacting the
representation of abstract
knowledge
DeepLogic [92] 2022 Integrates neural Utilizes a tree structure and Optimizes mutual Describes first-order
perception and logical logic operators for supervision signals for logical formulations,
reasoning in a unified sophisticated logical simultaneous learning of enhancing abstract
learning process formulations within perception and reasoning knowledge handling
neural networks
HRI [93] 2022 Solves ILP issues with a Matches meta-rule facts Uses a set of generic meta- Employs controlled noise
hierarchical rule with body predicates rules for common-sense and interpretability-
induction approach, through learned knowledge reasoning regularization for
efficiently integrating embeddings, abstract knowledge
neural and symbolic representing symbolic
methods structures
SenticNet 7 [94] 2022 Utilizes auto-regressive Transforms real language Enhances sentiment Provides a trustworthy and
models and kernel into a proto-language for analysis with explainable framework
methods for generating symbolic processing unsupervised, repeatable, for abstract knowledge
symbolic representations and interpretable models representation
from text
ASL [95] 2023 Combines deep learning Induces logical hypotheses Applies meta-interpretive Reduces inconsistency in
with abductive logical for subconcept learning for common- model outputs,
reasoning for subconcept representation and sense knowledge advancing abstract
learning and reasoning detection in neural acquisition and reasoning knowledge handling
networks through integrated
learning

123
Neural Computing and Applications (2024) 36:12809–12844 12825

Fig. 8 Neuro-symbolic AI process flow in type 1 systems. Symbols symbolic outputs, integrating the adaptability of neural embeddings
are translated into vector representations, processed through neural with the precision of symbolic logic
networks to capture intricate patterns, and then converted back into

components is a hallmark of this integrated model type combine the benefits of neural and symbolic techniques to
(Fig. 9). System types 2 include models, which use a solve difficult problems, as shown in Table 9.
symbolic stack machine to support recursion and sequence
manipulation and a neural network to generate the execu- 2.3.4 Type 4: neuro-symbolic ! neuro
tion trace. A notable instance of this hybrid approach is
AlphaGo [208], which integrates Monte Carlo Tree Search Systems of this fourth kind of integration include symbolic
(MCTS) [209] for problem-solving and a neural network rules and information into the design or training of neural
for heuristic evaluations, thereby showcasing the potential networks (Fig. 11). With the goal of seamlessly integrating
of combining strategic decision-making processes with symbolic domain information into connectionist architec-
neural network-based insights. It’s crucial to clarify that tures, this method has lately acquired traction. They also
while AlphaGo exemplifies the innovative use of neural include tightly coupled but localist neuro-symbolic systems
networks within a decision-making framework, its config- [237–242]. To teach a system in mathematics, for instance,
uration primarily enhances decision strategies and may not one may use tree representations of equations and mean-
fully encapsulate the traditional neural-symbolic integra- ingful mathematical expressions [243]. Symbolic programs
tion aimed at combining deep semantic reasoning with are produced and run by the neural network as completely
neural computation. Another case in point is a rule-based differentiable operations in Visual Question Answering
system that leverages abstract notions recorded by a neural models [84]. Graph neural networks (GNNs) [244] are
perception module as I/O requirements and is introduced being used more recently to include external knowledge
for program synthesis from raw visual observations. The bases with entities and relationships. Though some critics
usefulness of combining the skills of symbolic thinking claim GNNs’ reasoning power is lacking, Kautz classifies
with brain processing for complicated problem-solving such approaches as Type 4. Table 10 shows the properties
tasks is brought to light by type 2 systems. Table 8 shows of some contributions.
the properties of some contributions.
2.3.5 Type 5: neuroSymbolic
2.3.3 Type 3: neuro | symbolic
In order to train a neural network, type 5 neuro-symbolic
Type 3 neuro-symbolic AI systems combine neural and
AI systems include symbolic information as soft restric-
symbolic components to improve both aspects’ perfor-
tions into the loss function (tensors) (Fig. 12). The neural
mance. In this setup, the relationship between the neuro-
network is given the ability to reason with the information
logical and symbolic layers is more cooperative than
thanks to the incorporation of symbolic knowledge into the
strictly functional (Fig. 10). Some program synthesis
network weights. Logic tensor networks (LTNs)
algorithms, for instance, make use of deep learning to
[62, 66–68] are an example of this method; they use fuzzy
produce symbolic programs and rule systems that fulfill
relations on real numbers to represent first-order logic
high-level task specifications; the interaction between the
equations in neural computing, enabling gradient-based
neural and symbolic components aids in the model’s per-
sub-symbolic learning. To cope with approximate rather
formance. To improve decision-making, symbolic planning
than accurate reasoning, LTNs soften Boolean first-order
is also included in RL in neural-symbolic RL. Similarly,
logic as soft fuzzy logic. End-to-end training of networks
NLProlog [188] and DeepProbLog [75–77] employ neural
using symbolic knowledge is made possible by LTNs by
networks to calculate the probabilities of probabilistic facts
including logic rules in the network learning aim. When
and the inference mechanism of ProbLog to compute the
designing classifiers, class hierarchies are used as both the
required loss gradient, all of which are instances of type 3
classification targets and the background knowledge. The
systems. In general, type 3 neuro-symbolic AI systems

123
12826 Neural Computing and Applications (2024) 36:12809–12844

Table 7 Collection of papers with neuro-symbolic type 1 and their properties


Paper Year Domain Properties
Rep. Learn. Reason. Dec. Mak. Logic Neural Typ.

Burattini et al. [201] 2001 Expert Sys. Loc.  Comm.   


Hitzler et al. [202] 2003 Logic Prog. Dist. Ded.    FF NN
Coraggio et al. [203] 2008 Robotics Dist. Ded.    FF NN
Staffa et al. [204] 2011 Robotics Dist. Diff. Evol. [205]    FF NN
Hasoon et al. [206] 2013 Op. Sys. Dist. Ded.  Rule B.  ANN
word2vec [199] 2013 QA Dist. Grad. Desc.    RNN
Glove [200] 2014 QA Dist. Grad. Desc.    RNN
Golovko et al. [207] 2020 Comp. Vis. Dist. Ded.  Rule B.  ANN
Rep. Representation, Learn. Learning, Reason. Reasoning, Dec. Mak. Decision Making, Logic Logic Type, Neural Typ. Neural Type, Ded.
Deductive, Dist. Distributed, Loc. Localist, FF NN Feed Forward Neural Network, ANN Artificial Neural Network, RNN Recurrent Neural
Network, Comm. Common-sense, Rule B. Rule Based, Grad. Desc. Gradient Descent

purpose of objective functions in training is to encourage that mimics the logic of tensor calculus to train neural
consistency between predictions and the existing class networks to carry out symbolic operations. Their capacity
structure. Additional training targets for hierarchical scene for logical thinking, however, remains low. Kautz argues
parsing are compositional relations over semantic hierar- that type 6 techniques should be able to do combinatorial
chies. Table 11 shows the properties of some contributions. reasoning since they are computer models of Kahneman’s
System 1 and System 2, although such a fully fledged
2.3.6 Type 6: neuro[symbolic] system does not exist yet. According to Kautz, no current
proper integration method comes close to matching the
Most experts agree that type 6 neuro-symbolic AI has the quality of a Type 6 system. Nevertheless, Type 6 systems
most promise for bringing together the best features of could significantly advance AI by bringing together sym-
traditional symbolic AI with modern neural-based AI. A bolic reasoning and neural networks. Table 12 shows the
symbolic thinking engine is embedded directly into a properties of some contributions claiming to be in type 6.
neural engine, making this a completely integrated system
(Fig. 13). Type 6 methods include a family of algorithms
3 Applications

The rapid advancement of neuro-symbolic integration in


recent years has paved the way for the emergence of a
plethora of new applications. Here, we showcase several
widely used applications in an effort to spark future inno-
vation across a wider range of use cases.

3.1 Neuro-symbolic AI in robotics

Neuro-symbolic AI is significantly advancing robotics by


enabling robots to perform complex tasks previously
deemed unattainable, leveraging the fusion of neural net-
work adaptability with the structured logic of symbolic AI.
This synergy enhances robots’ capabilities to perceive,
reason, and act in intricate and unpredictable environments.
Fig. 9 Integration framework of type 2 neuro-symbolic AI. The Notable implementations include robots learning new skills
diagram illustrates a neural network acting as an intermediary from human demonstrations, translating these into sym-
between input/output flows and a symbolic AI system. The neural
bolic plans, and reasoning about objects’ physical proper-
components provide insight-driven inputs to the symbolic problem
solver, characterizing the loosely coupled but predominantly sym- ties and their environmental interactions.
bolic nature of these systems

123
Neural Computing and Applications (2024) 36:12809–12844 12827

Table 8 Collection of papers with neuro-symbolic type 2 and their properties


Paper Year Domain Properties Neural Typ.
Rep. Learn. Reason. Dec. Mak. Logic

Neuro-Data-Mine [210] 2000 Medical applications Dist. Unsup.    


Corchado et al. [211] 2001 Oceanography Dist. Sup. Case-B.  Prop. Belief network
Riverola et al. [212] 2002 Oceanography Dist. Sup. Case-B.  Prop. RBF ANN
Neagu et al. [213] 2002 Air Quality Dist. Sup.    Basic ANN
Corchado et al. [214] 2003 Oceanography Dist. Sup. Case-B.   Basic ANN
Fsfrt [215] 2003 Oceanography Dist. Sup. Case-B.  Prop. RBF ANN
Policastro et al. [216] 2003 Mechanics Dist. Sup. Case-B.  Prop. MLP
Fernandez et al. [217] 2004 Biology Dist. Unsup. Case-B.  Fuzzy 
Corchado et al. [218] 2005 Business Dist. Sup. Case-B.   Basic ANN
Prentzas et al. [50, 219] 2008 UCI [220] Dist. Sup. Case-B.   Basic ANN
Borrajo et al. [221] 2008 Business Loc. Sup. Case-B. Rule B. Prop. 
Hatzilygeroudis et al. [222, 223] 2011 Business Loc. Sup. Case-B. Rule B. Prop. 
Bach et al. [224] 2015 Minecraft Dist. Sup.  Rule B. Prop. 
Bologna et al. [225] 2017 Computer Vision Dist. Sup.  Rule B. Prop. Deep MLP
Rep. Representation, Learn. Learning, Reason. Reasoning, Dec. Mak. Decision Making, Logic Logic Type, Neural Typ. Neural Type, Sup.
Supervised, Unsup. Unsupervised, Case-B. Case-Based, Rule B. Rule Based, Prop. Propositional, Basic ANN Basic Artificial Neural Network,
RBF ANN Radial Basis Function Artificial Neural Network, MLP Multilayer Perceptron, Deep MLP Deep Multilayer Perceptron

Staffa et al. [204] explored robotic control by tuning


thresholds within a neuro-symbolic network, demonstrating
enhanced adaptability and decision-making in behavior-
based robotics. The dynamic adjustment of behavior in
response to environmental changes showcases the potential
of neuro-symbolic approaches in improving robotic
autonomy and efficiency.
Coraggio and De Gregorio [229] developed a neuro-
symbolic hybrid method for landmark recognition and
robot localization, improving landmark detection robust-
ness and robot navigation accuracy in complex settings.
This method exemplifies the significant contributions of
neuro-symbolic integration to the field of robotics, partic-
ularly in spatial awareness and adaptability applications.
An innovative approach to active video surveillance was
Fig. 10 Dynamic interplay in type 3 neuro-symbolic AI systems. The
illustration depicts a cyclical interaction where a neural network and a
presented in [230], integrating virtual neural sensors with
symbolic AI system operate in a feedback loop, allowing for both BDI agents for enhanced system intelligence and reactivity.
procedural learning and logical inference. This structure supports This integration yields a highly adaptive surveillance sys-
complex tasks like program synthesis, as seen in systems that interpret tem capable of autonomous operation in dynamic envi-
visual data through neural perception and apply symbolic reasoning
for output generation
ronments, highlighting the benefits of combining neural
networks’ perceptual abilities with symbolic AI’s reason-
Coraggio et al. [203] devised a neuro-symbolic system ing capabilities.
for robot self-localization in minimally sensor-equipped Kraetzschmar et al. [226] utilized neuro-symbolic inte-
environments, utilizing natural environmental features as gration for environmental modeling in mobile robotics,
landmarks for navigation. This approach blends neural enabling dynamic and efficient environment representation
networks’ perceptual strengths with symbolic AI’s logical crucial for navigation and interaction. This approach
reasoning, enabling sophisticated decision-making pro- underscores the importance of combining neural
cesses based on landmark detection and encoding.

123
12828 Neural Computing and Applications (2024) 36:12809–12844

Table 9 Collection of papers with neuro-symbolic type 3 and their properties


Paper Year Domain Properties Neural Typ.
Rep. Learn. Reason. Dec. Mak. Logic

Kraetzschmar et al.[226] 2000 Mobile Robotics Dist. Sup.   Prop. Voronoi


WiSARD [227, 228] 2003 Computer Vision Dist. Sup.   F.O. Basic ANN
Coraggio et al. [229] 2007 Robotics Dist. Sup.   F.O. Basic ANN
De Gregorio et al. [230] 2008 Robotics Dist. Sup. Ded.  F.O. Basic ANN
Qadeer et al. [231] 2009 Home Care Loc. Sup. Ded. Ontology Prop. Basic ANN
Dietrich et al. [232] 2009 Robotics Loc. Sup. Ded. Ontology Prop. Basic ANN
Barbosa et al. [233, 234] 2017 Computer Vision Dist. Sup.   F.O. Basic ANN
Yi et al. [235] 2018 Computer Vision Dist. Sup.   Symbolic CNN
NLProlog [188] 2019 Question Answering Dist. ILP [236]  Rule B. Symbolic MLP
Rep. Representation, Learn. Learning, Reason. Reasoning, Dec. Mak. Decision Making, Neural Typ. Neural Type, Sup. Supervised, Ded.
Deductive, Prop. Propositional, F.O. First Order, ILP Inductive Logic Programming, CNN Convolutional Neural Network, MLP Multilayer
Perceptron

represents a significant advancement in the field. It adopts a


quasi-symbolic approach, utilizing neural networks for
inference and symbolic data for generating logical actions.
This method provides a framework for common-sense
knowledge acquisition and reasoning based on sensory
inputs, thereby offering insights into handling abstract
knowledge through neuro-symbolic interactions.
Furthermore, the development of the Neuro-Symbolic
Dynamic Reasoning (NS-DR) model, tailored for the
CLEVRER video reasoning dataset [280], introduces a
neural dynamics predictor. This learned physics engine is
crucial for accounting for causal relations in dynamic
environments, making it particularly relevant for robotics
applications where understanding and predicting physical
interactions are key.
Fig. 11 Type 4 neuro-symbolic AI system with explicit mapping.
This figure shows a structure where a distinct mapping layer explicitly
These are just a handful of the ways that neuro-symbolic
connects the symbolic AI component with the neural network. This AI is revolutionizing robotics. Several key viewpoints and
setup allows for direct translation of symbolic reasoning into neural limitations emerge that future researchers in the field of
operations and vice versa, facilitating complex tasks that require tight neuro-symbolic AI in robotics can address:
integration of both symbolic and sub-symbolic processes
a. Environmental complexity and dynamic adaptation
While neuro-symbolic systems like those developed by
adaptability with symbolic reasoning in enhancing robots’
Coraggio et al. [203] and Staffa et al. [204] have shown
real-world operational effectiveness.
promise in navigating and making decisions based on
The research [131] conducted by Google Inc., Byte-
environmental features, the adaptability of these systems to
Dance Inc., and Tsinghua University on the neuro-sym-
rapidly changing or highly complex environments remains
bolic Neural Logic Machine (NLM) [78] has demonstrated
a challenge. Future research could focus on enhancing the
state-of-the-art methods for solving general application
robustness and flexibility of neuro-symbolic systems to
tasks like array sorting, critical path finding, and more
better cope with unpredictable changes in the environment.
intricate tasks such as Blocks World. This approach allows
b. Perception and landmark recognition The work by
for the application of generalized rules to achieve target
Coraggio and De Gregorio [229] on landmark recognition
results from randomized layouts, showcasing the potential
for robot localization points to the need for improved
of NeSy in enhancing robotic capabilities.
perceptual accuracy and the ability to distinguish between
Moreover, the Neuro-Symbolic Concept Learner (NS-
similar features in the environment. Enhancing the per-
CL) model, designed for the CLEVR dataset [179],
ceptual capabilities of neuro-symbolic systems, possibly

123
Neural Computing and Applications (2024) 36:12809–12844 12829

Table 10 Collection of papers with neuro-symbolic type 4 and their properties


Paper Year Domain Properties Neural Typ.
Rep. Learn. Reason. Dec. Mak. Logic

NEURULES [237] 2000 Medical applications Loc. LMS  Rule B. Prop. 


INSS [238] 2001 Monk’s Problem [243] Loc. Incr.  Rule B. Prop. Cascade correlation
Garcez et al. [239] 2001 Molecular Biology Loc. Ded.  Rule B. Prop. Basic NN
Prentzas et al. [240] 2002 Intelligent Tutoring Loc. Ded.  Rule B. Prop. Basic NN
Salgado et al. [241] 2003 Neurobiology Loc. Ded.  Rule B. Prop. Basic NN
Omlin et al. [245] 2003 Medical diagnosis Dist. Ind.  Rule B. Prop. Basic NN
Bologna et al. [246] 2003 Medical diagnosis Dist. Ind.  Rule B. Prop. MLP
Obot et al. [247] 2009 Medical diagnosis Dist. Sup. C-B. Rule B. Prop. MLP
Boulahia et al. [248] 2015 UCI [220] Dist. Sup. C-B. Rule B. Prop. Basic NN
Prentzas et al. [52] 2016 Life Insurance Dist. Sup. Neurule Rule B. Prop. Basic NN
Ghosh et al. [249] 2018 Medical applications Dist. Sup.  Rule B. Prop. Basic NN
Bhatia et al. [250] 2018 Code Correction Dist. Sup. Constr.-based Rule B.  RNN
Prentzas et al. [242] 2019 Medical diagnosis Loc. Ded.  Rule B. Prop. Basic NN
Rep. Representation, Learn. Learning, Reason. Reasoning, Dec. Mak. Decision Making, Logic Logic Type, Neural Typ. Neural Type, Sup.
Supervised, Unsup. Unsupervised, Case-B. Case-Based, Rule B. Rule Based, Prop. Propositional, Basic ANN Basic Artificial Neural Network,
RBF ANN Radial Basis Function Artificial Neural Network, MLP Multilayer Perceptron, RNN Recurrent Neural Network, LMS Least Mean
Square, Incr. Incremental, C-B. Case-Based, Constr.-based Constraint-based

Fig. 12 Type 5 neuro-symbolic AI with tensor-based transformation. converted into symbolic FoL, highlighting a system where symbolic
This visualization presents the conversion of symbolic first-order logic is seamlessly integrated with tensorial neural computation
logic (FoL) into tensors, processed by a neural network, and then re-

through more advanced neural network architectures or complex interactions between robots and their
more sophisticated symbolic reasoning mechanisms, could surroundings.
be a valuable area of exploration. e. Generalization and application of rules The successes
c. Autonomy in surveillance systems The integration of of the Neural Logic Machine (NLM) [78] and the Neuro-
virtual neural sensors with BDI agents as explored in [230] Symbolic Concept Learner (NS-CL) [179] in applying
highlights the potential for autonomous operation in generalized rules to specific tasks suggest an area for fur-
surveillance systems. However, ensuring these systems can ther research in the generalization capabilities of neuro-
operate with minimal human intervention while making symbolic systems. Investigating how these systems can
contextually appropriate decisions in dynamic scenarios is learn and apply rules across a broader range of scenarios
an ongoing challenge. Research could delve into optimiz- without significant retraining could enhance their applica-
ing the balance between neural network-driven perception bility in robotics.
and symbolic agent-driven decision-making to improve f. Causal reasoning and physical interactions The
autonomy. development of the Neuro-Symbolic Dynamic Reasoning
d. Environmental modeling and interaction Kraet- (NS-DR) model [280] addresses the need for understanding
zschmar et al.’s [226] work on environmental modeling causal relationships in dynamic environments, which is
underscores the importance of efficient and dynamic crucial for robotics. Expanding on this work to include
environment representation. Future efforts could focus on more complex physical interactions and causal mecha-
developing more sophisticated models that account for a nisms could improve the predictive and reasoning capa-
wider range of environmental variables and enable more bilities of robotic systems.

123
12830 Neural Computing and Applications (2024) 36:12809–12844

Table 11 Collection of papers with neuro-symbolic type 5 and their properties


Paper Year Domain Properties Neural Typ.
Rep. Learn. Reason. Dec. Mak. Logic

Souici et al. [251] 2004 Text Recognition Dist. Ded. Case-B. Rule B. Prop. Basic ANN
Perrier et al. [252] 2005 Autonomous vehicles Dist. Sup. Case-B. Rule B. Prop. Basic ANN
Sanchez et al. [253] 2008 Textiles Dist. Incr. Case-B. Rule B. - Basic ANN
Velik et al. [254] 2010 Computer Vision Dist. Incr. Ded.  Prop. Basic ANN
SHERLOCK [255] 2011  Dist. Ind. Ded.  F.O. Basic ANN
Saikia et al. [256] 2016 Optimization Dist. ILP Ded.  F.O. DBN
k-il [257] 2019 Medical Dist. Ind. Knowledge Graph Rule B. F.O. LSTM
Khan et al. [258] 2020 Computer Vision Dist. Sup. Knowledge Graph Rule B. F.O. DNN
Kapanipathi et al. [259] 2020 Question Answering Dist. Sup. Knowledge Graph Rule B. F.O. LNN
Neurasp [260] 2020 Computer Vision Dist. Unsup. Common Sense Rule B. F.O. Basic ANN
NSSE [261] 2021 Aircraft Maintenance Dist. Sup. Knowledge Graph Rule B. F.O. LSTM
Stammer et al. [262] 2021 Computer Vision Dist. Unsup. Ded. Rule B. F.O. CNN
Kimura et al. [263] 2021 Question Answering Dist. Sup. Knowledge Graph Rule B. F.O. LNN
Evans et al. [264] 2021 Computer Vision Dist. Unsup.  Rule B. Prop. LSTM
PIGLeT [177] 2021 Question Answering Dist. Unsup. Common Sense Rule B. Prop. LSTM
DUA [265] 2022 Optimization Dist. ILP Inductive Rule B. F.O. 
Rep. Representation, Learn. Learning, Reason. Reasoning, Dec. Mak. Decision Making, Logic Logic Type, Neural Typ. Neural Type, Sup.
Supervised, Ded. Deductive, Incr. Incremental, Case-B. Case-Based, Rule B. Rule Based, Prop. Propositional, Basic ANN Basic Artificial
Neural Network, RBF ANN Radial Basis Function Artificial Neural Network, MLP Multilayer Perceptron, RNN Recurrent Neural Network, LMS
Least Mean Square, ILP Inductive Logic Programming, DBN Deep Belief Network, LSTM Long Short-Term Memory, DNN Deep Neural
Network, LNN Logical Neural Network

Fig. 13 Type 6 neuro-symbolic AI integration model. The process conceptualizes the ideal of a fully integrated system, embedding a
begins with a neural unit that feeds into a series of logical units, symbolic reasoning engine within a neural framework. As proposed
symbolizing the transition from sub-symbolic neural processing to by Kautz, it symbolizes the aspiration for a comprehensive AI model
higher-level logical reasoning. This represents an advanced form of capable of both Kahneman’s intuitive (System 1) and deliberate
integration where the neural network output is not just interpreted but (System 2) thinking processes
also informs and shapes logical unit operations. This illustration

Addressing these limitations and exploring these view- AI, blending the strengths of neural networks’ data pro-
points could significantly advance the field of neuro-sym- cessing with symbolic AI’s logical reasoning. Notably,
bolic AI in robotics, leading to more capable, adaptable, models like Word2Vec and GloVe have revolutionized
and intelligent robotic systems. word representation, enabling AI systems to understand
and process natural language queries more effectively.
3.2 Neuro-symbolic AI in question answering Mikolov et al.’s work on efficient word representations
[199] and Pennington et al.’s development of GloVe [200]
The field of question answering (QA) has seen remarkable have set significant milestones in semantic understanding,
advancements through the integration of neuro-symbolic essential for interpreting complex questions.

123
Neural Computing and Applications (2024) 36:12809–12844 12831

Table 12 Collection of papers with neuro-symbolic type 6 and their properties


Paper Year Domain Properties Neural Typ.
Rep. Learn. Reason. Dec. Mak. Logic

Alshahrani et al. [266] 2017 Biology Dist. Unsup. K. Graph Rule B. F.O. G. Embed. [267]
Agibetov et al. [268] 2018 Biology Dist. Unsup. K. Graph Rule B. F.O. G. Embed. [269]
Bianchi et al. [270] 2019 DBpedia Dist. Unsup. K. Graph Rule B. F.O. G. Embed. [271]
Oltramari et al. [272] 2019 Question Answering [273] Dist. Unsup. K. Graph Rule B. F.O. G. Embed. [274]
Doldy et al. [275] 2021 Edge Computing Dist. Unsup. K. Graph Rule B. F.O. G. Embed. [276]
Sun et al. [277] 2021 Table Understanding Dist. Unsup. PSL [278] Rule B. F.O. G. Embed. [279]
Rep. Representation, Learn. Learning, Reason. Reasoning, Dec. Mak. Decision Making, Logic Logic Type, Neural Typ. Neural Type, Unsup.
Unsupervised, K. Graph Knowledge Graph, Rule B. Rule Based, F.O. First Order, G. Embed. Graph Embedding

Further enhancing QA systems, the Neuro-Symbolic Mikolov et al. [199] and Pennington et al. [200], respec-
Program Synthesis (NSPS) approach [62] exemplifies the tively, has been instrumental in enhancing semantic
seamless integration of symbolic knowledge into neural understanding in QA systems. Future research could delve
frameworks, enabling the execution of symbolic programs into further improving word representation models to
for query resolution. This method stands out for its per- capture nuanced linguistic features and contextual mean-
formance on benchmark datasets like WikiTableQuestions ings, potentially through more advanced and higher
and Spider, highlighting its efficacy in deriving accurate dimensional integration of symbolic knowledge.
answers from structured data. b. Symbolic program execution for query resolution The
Innovations such as the PIGLeT model by Zellers et al. Neuro-Symbolic Program Synthesis (NSPS) approach
[177] introduce a novel dimension to QA by grounding introduced by Parisotto et al. [62] exemplifies the suc-
language in a 3D world, merging physical common-sense cessful incorporation of symbolic knowledge into neural
with linguistic understanding. This dual approach, com- frameworks for query resolution. However, extending the
bining a physical dynamics model with a language model, applicability of such models to a broader range of natural
allows for the prediction and verbalization of object language queries and diverse datasets remains a challenge,
interactions, showcasing the model’s proficiency in neuro- inviting further exploration into adaptable and scalable
symbolic interaction. neuro-symbolic integration techniques.
Research by Weber et al. [188], which integrates Pro- c. Grounding language in physical reality The PIGLeT
log’s reasoning with natural language processing, and the model by Zellers et al. [177] merges physical common-
comparative study by Ma et al. [281] on common-sense sense with linguistic understanding, a novel approach in
QA, further illustrate the diversity of strategies employed QA. Expanding on this, future work could focus on
to enhance question understanding and answer generation. enhancing the integration of physical dynamics models
These studies underscore the importance of knowledge with language models to improve the prediction and ver-
base compatibility and the integration techniques’ role in balization of complex object interactions, moving toward
model performance, advocating for a hybrid approach that more holistic neuro-symbolic systems that can reason
leverages both data-driven and knowledge-driven pro- about both the physical and linguistic aspects of queries.
cesses for superior reasoning and explainability in AI d. Knowledge base compatibility and reasoning Studies
systems. such as those by Weber et al. [188] highlight the impor-
Through these pioneering works, the QA domain con- tance of integrating reasoning capabilities, like those in
tinues to evolve, with neuro-symbolic AI playing a pivotal Prolog, with natural language processing for QA.
role in developing more nuanced, context-aware systems Enhancing knowledge base compatibility and the tech-
capable of tackling the intricacies of human language and niques for integrating symbolic reasoning into neural
cognition. This fusion has led to more sophisticated natural models could lead to more accurate and explainable QA
language understanding and processing, essential for systems. Research could explore advanced methods for
interpreting and responding to complex queries. Some key seamlessly merging data-driven insights with structured
viewpoints in this domain can be: knowledge bases to improve reasoning and context-
a. Semantic understanding and word representation The awareness in responses.
development of models like Word2Vec and GloVe by

123
12832 Neural Computing and Applications (2024) 36:12809–12844

e. Hybrid approaches for enhanced reasoning and (DIMLP) [246] furthers the transparency of neural net-
explainability The diversity of strategies employed in the works in medical diagnostics, enabling rule extraction that
QA domain underscores the potential of hybrid approaches aligns with neural network responses and uncovering sig-
that combine data-driven and knowledge-driven processes. nificant biomarkers for disease classification.
Future research could investigate new methods for lever- The framework by Obot and Uzoka [247] represents a
aging both neural network capabilities and symbolic AI’s comprehensive integration of case-based, rule-based, and
structured reasoning to create QA systems with superior neural network methodologies, overcoming individual
reasoning, adaptability, and explainability. limitations and providing a robust diagnostic tool. This
Addressing these viewpoints and limitations could sig- hybrid system has shown strong correlations with con-
nificantly advance the field of QA, leading to the devel- ventional neural network results while offering additional
opment of AI systems that are not only more capable of explanatory insights, marking a significant step toward
handling complex queries but also more intuitive and explainable and reliable medical AI applications.
aligned with human cognitive processes. The application of neuro-symbolic AI in the medical
domain offers promising advancements, particularly in
3.3 Neuro-symbolic AI in medical applications enhancing clinical decision support systems by merging the
precision of symbolic AI with the adaptability of neural
The medical industry presents a promising landscape for networks. This integration facilitates more accurate and
the integration of neuro-symbolic AI, significantly personalized diagnoses, improving patient care through
advancing clinical decision support systems. By blending more insightful analyses of complex medical data. Some
the analytical precision of symbolic AI with the adapt- key viewpoint might be:
ability of neural networks, neuro-symbolic reasoning a. Diagnostic accuracy and personalization: The capa-
(NSR) has been effectively employed for more accurate bility of neuro-symbolic reasoning (NSR) in precise med-
and personalized diagnoses. Research has demonstrated ical diagnosis, such as the identification of acute abdominal
NSR’s capability in accurately identifying acute abdominal pain, illustrates its potential in refining diagnostic processes
pain, showcasing its potential in improving diagnostic [282]. Future research could focus on expanding the range
accuracy [282]. of medical conditions NSR can accurately diagnose,
Further, neuro-symbolic integration (NSI) has been ensuring broader applicability and personalization in
applied to electronic health records analysis, combining patient care.
deep learning with symbolic reasoning to extract actionable b. Interpretability of high-dimensional data The Neuro-
insights, potentially enhancing patient care [283]. The Data-Mine framework by Ultsch [210] emphasizes the
Neuro-Data-Mine framework by Ultsch [210] is notable for importance of transforming sub-symbolic data into a
its efficient transformation of sub-symbolic to symbolic symbolic format to make complex medical data more
data, crucial for making high-dimensional medical data interpretable. Enhancing these transformation techniques
interpretable. This approach underlines the utility of neuro- could further improve the clarity and usability of medical
symbolic methods in complex tasks like cerebrospinal fluid data, aiding in more nuanced data analysis and decision-
analysis, emphasizing their role in advancing precision making in healthcare.
medicine through improved data analysis and c. Efficiency in knowledge base management The inte-
intelligibility. gration of production rules with neural units, as demon-
Hybrid formalisms, such as those proposed by Hatzi- strated by Hatzilygeroudis and Prentzas [237], showcases
lygeroudis and Prentzas [237], integrate production rules the potential for neuro-symbolic systems to streamline
with neural units to streamline knowledge bases, demon- knowledge bases and improve inference efficiency in
strating improved inference efficiency in medical contexts medical diagnostics. Research could explore advanced
like bone inflammation diagnosis. This approach highlights hybrid formalisms that further optimize knowledge base
the effectiveness of neuro-symbolic systems in managing management and inference processes in medical
complex decision-making and pattern recognition tasks, applications.
offering superior performance compared to traditional d. Transparency in medical diagnostics The develop-
methods. ment of models like the discretized interpretable multi-
Omlin and Snyders’ work [245] on inductive bias in layer perceptron (DIMLP) by Bologna [246] highlights the
neural networks, tailored by prior knowledge, showcases need for transparency in neural network-based medical
the potential of neuro-symbolic approaches in medical diagnostics. Future efforts could aim at enhancing rule
analysis, such as breast tissue characterization from mag- extraction techniques to align more closely with neural
netic resonance spectroscopy. Bologna’s development of network responses, facilitating the identification of critical
the discretized interpretable multi-layer perceptron

123
Neural Computing and Applications (2024) 36:12809–12844 12833

biomarkers and disease classifications with greater accu- work underscores the potential of deep learning models to
racy and interpretability. maintain a balance between accuracy and interpretability, a
e. Comprehensive diagnostic tools The comprehensive crucial aspect in the application of AI in sensitive fields
framework by Obot and Uzoka [247], which combines such as medical diagnostics.
case-based, rule-based, and neural network methodologies, In the realm of multimedia and language integration,
overcomes the limitations of individual approaches and Burattini et al. [227] and Grieco et al. [228] have con-
offers a more robust diagnostic tool. Expanding this inte- tributed significantly by exploring the synergy between
gration to incorporate the latest advancements in neural verbal and visual information and the concept of generating
network architectures and symbolic reasoning methods pattern examples from ‘‘mental’’ images, respectively.
could yield even more powerful and explainable medical These studies highlight the multifaceted nature of neuro-
diagnostic systems. symbolic AI in bridging the gap between cognitive rea-
Addressing these aspects could significantly advance soning and sensory perception, offering novel insights into
neuro-symbolic AI’s contribution to the medical field, pattern recognition and generation.
leading to the development of highly effective, transparent, The neuro-symbolic approach has also been pivotal in
and patient-centric clinical decision support systems. spatial-temporal pattern analysis, as demonstrated by Bar-
bosa et al. [233, 234] in their work on GPS trajectory
3.4 Neuro-symbolic AI in computer vision classification. Their methodology exemplifies the integra-
tion of neural network adaptability with symbolic AI’s
In the evolving landscape of computer vision, neuro-sym- structured logic, enhancing the interpretability and com-
bolic AI has emerged as a pivotal force, driving innova- putational efficiency of trajectory analysis.
tions across various domains including object recognition, Moreover, the exploration of reasoning, vision, and
scene interpretation, and image categorization. The inte- language understanding by Yi et al. [235] through Neural-
gration of symbolic reasoning with deep learning models, Symbolic Visual Question Answering (VQA) and the
facilitated by approaches like graph neural networks advancements in multimedia event processing by Khan and
(GNNs) [244], has enabled the embedding of items and Curry [258] further underscore the breadth of neuro-sym-
relations within external knowledge bases, such as bolic AI’s application in computer vision and beyond.
ontologies or knowledge graphs, enhancing the interpretive As the field continues to evolve, the focus on developing
capabilities of AI systems in understanding complex visual sophisticated neuro-symbolic architectures that seamlessly
content. combine the learning process of neural networks with the
A notable advancement in this field is the Neuro-Sym- structured knowledge representation of symbolic systems
bolic Concept Learner (NS-CL) framework [179], which remains paramount. The future of computer vision lies in
leverages GNNs to encode the relationships between visual creating more adaptable and generalized models that not
features and their corresponding concepts within a only mimic human visual capabilities but also encapsulate
knowledge graph, thereby predicting potential concepts in transparent and comprehensible reasoning processes,
new images. This framework exemplifies the fusion of sub- bridging the chasm between artificial intelligence and
symbolic learning with symbolic knowledge, where logical human cognition. Some key viewpoints in this domain can
principles are rendered into fuzzy relations using logic be:
tensor networks (LTNs) [62, 66–68], offering a robust a. Enhancing interpretive capabilities The integration of
mechanism for interpreting visual scenes and reasoning graph neural networks (GNNs) with symbolic reasoning
about abstract ideas. has facilitated the embedding of visual elements and their
The application of neuro-symbolic AI in computer relationships within external knowledge bases, improving
vision is vividly illustrated in the work of Golovko et al. AI systems’ ability to understand intricate visual scenes.
[207], who developed an intelligent decision support sys- Future research could focus on refining these integrations
tem (IDSS) for enhancing product labeling quality control. to handle more complex, abstract visual concepts and their
This system epitomizes the synergy between deep neural interrelations.
networks, for image localization and recognition, and b. Predicting concepts in images The Neuro-Symbolic
semantic networks, for intelligent data processing, Concept Learner (NS-CL) framework represents a leap in
demonstrating the efficacy of neuro-symbolic approaches encoding relationships between visual features and con-
in real-world manufacturing environments. cepts within knowledge graphs. Expanding this framework
Further enriching the discourse, Bologna and Hayashi to encompass a broader array of concepts and visual fea-
[225] explored the transparency of deep learning systems tures could further enhance the predictive accuracy and
by characterizing symbolic rules within deep discretized applicability of neuro-symbolic systems in computer
interpretable multi-layer perceptrons (DIMLPs). Their vision.

123
12834 Neural Computing and Applications (2024) 36:12809–12844

c. Real-world application in manufacturing The intelli- remains to be fully automated. Neuro-symbolic AI tech-
gent decision support system developed by Golovko et al. niques, which combine symbolic reasoning with neural
[207] exemplifies the practical application of neuro-sym- network models, have shown promise in overcoming this
bolic AI in enhancing product labeling quality control. challenge, enabling effective program synthesis for tasks
Research aimed at extending such systems to other man- such as sorting or searching algorithms [62]. Moreover,
ufacturing domains could revolutionize quality assurance neuro-symbolic AI extends to enhancing software effi-
processes across various industries. ciency, where optimizations are discovered by blending
d. Balancing accuracy and interpretability The work by symbolic reasoning with insights derived from neural net-
Bologna and Hayashi [225] on characterizing symbolic work training on program execution patterns.
rules within deep learning models highlights the impor- In the domain of programming and optimization, Bhatia,
tance of maintaining a balance between model accuracy Kohli, and Singh [250] introduced a groundbreaking neuro-
and interpretability. Future efforts could explore novel symbolic program corrector tailored for introductory pro-
methodologies to enhance the transparency and explain- gramming assignments. This tool harnesses both neural
ability of deep learning models without compromising their networks and symbolic AI to identify and rectify errors in
performance. student-submitted code, providing an automated and
e. Bridging cognitive reasoning and sensory perception intelligent feedback system that enhances the learning
Studies by Burattini et al. [227] and Grieco et al. [228] experience for programming novices. The neuro-symbolic
underline the potential of neuro-symbolic AI in integrating approach not only detects syntactic errors but also grasps
verbal and visual information and generating pattern the semantic intent behind the code, ensuring corrections
examples from ‘‘mental’’ images. Advancing these are accurate and contextually relevant.
approaches could offer deeper insights into cognitive pro- Sen et al. [87] present a novel approach to inductive
cesses and sensory perception, facilitating more intuitive logic programming (ILP) by integrating it with logical
human-AI interactions. neural networks (LNNs), offering a neuro-symbolic ILP
f. Spatial-temporal pattern analysis The methodology framework that merges ILP’s structured reasoning with the
employed by Barbosa et al. [233, 234] for GPS trajectory adaptability of LNNs. This combination facilitates the
classification demonstrates the effectiveness of combining extraction and refinement of logical rules from data,
neural network adaptability with symbolic logic. Further marking a significant advancement in AI, particularly in
research in this area could enhance the interpretability and programming and optimization.
efficiency of analyzing spatial-temporal patterns, with Chaudhuri et al. [19] delve into neuro-symbolic pro-
broad implications for navigation, urban planning, and gramming, highlighting the fusion of neural networks with
environmental monitoring. symbolic programming paradigms to address the limita-
g. Integrating reasoning, vision, and language The tions of purely data-driven or rule-based systems. This
exploration of neuro-symbolic approaches in tasks like synthesis represents a pivotal shift toward creating more
Visual Question Answering (VQA) by Yi et al. [235] and adaptable, interpretable, and robust AI systems in the
multimedia event processing by Khan and Curry [258] programming and optimization domain.
showcases the vast potential of neuro-symbolic AI beyond Yin and Neubig [284] introduce a syntactic neural
traditional computer vision tasks. Expanding these model for general-purpose code generation, leveraging
methodologies to more complex, multimodal interactions structural patterns in programming languages to generate
could significantly advance AI’s cognitive capabilities. code from natural language descriptions. This advancement
Addressing these aspects could propel the field of holds significant promise for automating coding tasks and
computer vision forward, leading to the development of AI bridging the gap between natural language processing and
systems that not only emulate human visual and cognitive software engineering.
abilities but also offer transparent and understandable Ritchie et al. [285] explore the application of neuro-
reasoning processes, narrowing the gap between artificial symbolic models in computer graphics, addressing the
intelligence and human-like cognition. challenges of generating, rendering, and manipulating
graphical content. This novel integration promises to rev-
3.5 Neuro-symbolic AI in programming olutionize computer graphics by introducing more intelli-
and optimization gent and adaptable systems.
Reddy and Balasubramanian [286] explore estimating
The science of computer programming and optimization treatment effects using Neuro-Symbolic Program Synthe-
has greatly benefited from the integration of neuro-sym- sis, offering a nuanced understanding of treatment efficacy
bolic AI. The objective of program synthesis is to generate and potentially transforming fields such as healthcare and
programs that fulfill a specified task, a challenge that policy analysis.

123
Neural Computing and Applications (2024) 36:12809–12844 12835

Li, Huang, and Naik [287] introduce ‘‘Scallop,’’ a lan- Future efforts could focus on developing sophisticated
guage designed for neuro-symbolic programming, aiming frameworks and languages that ease the integration of
to bridge the gap between neural and symbolic computing neural and symbolic components, enhancing AI’s adapt-
paradigms and facilitate the development of neuro-sym- ability and interpretability across various applications.
bolic applications. d. Cross-disciplinary applications and innovations The
Varela’s doctoral dissertation [288] investigates the exploration of neuro-symbolic AI in fields such as com-
impact of hybrid neural networks on meta-learning objec- puter graphics [285] and healthcare [286] illustrates its
tives, shedding light on the potential of hybrid networks to versatile applicability. Research aimed at exploring and
enhance the efficiency and effectiveness of meta-learning expanding neuro-symbolic AI’s capabilities in diverse
processes. domains could unlock new possibilities for innovative
Mundhenk et al. [289] explore symbolic regression via applications, from digital media to precision medicine.
neural-guided genetic programming, aiming to enhance the e. Automating the design of optimization algorithms The
efficiency and accuracy of symbolic regression tasks by initiative to automate the discovery of optimization algo-
leveraging the strengths of neural networks. rithms [290] opens up new research avenues in making AI
Chen et al. [290] embark on the symbolic discovery of systems more efficient and autonomous. Investigating
optimization algorithms, signifying a pivotal shift toward autonomous methods for identifying and implementing
automating the design of optimization algorithms and optimizations could lead to breakthroughs in computational
potentially accelerating the advancement of AI and com- efficiency and AI model performance.
putational sciences. By focusing on these consolidated themes, future
The infusion of neuro-symbolic AI into programming research in neuro-symbolic AI within the programming and
and optimization heralds a promising horizon, marked by optimization domain can address existing challenges and
enhanced learning tools, innovative problem-solving unlock new potentials, paving the way for more intelligent,
methodologies, and a deeper understanding of complex efficient, and user-friendly AI systems.
systems. While strides have been made, the journey toward
fully realizing the potential of neuro-symbolic AI contin-
ues, with future research poised to tackle the remaining 4 Challenges
challenges of scalability, interpretability, and the seamless
integration of neural and symbolic systems. The key points The subject of neuro-symbolic AI is expanding quickly,
from the programming and optimization domain can be thanks to its ability to integrate deep learning methods with
consolidated into broader themes to capture the essence of symbolic reasoning to produce more robust and versatile
current achievements and future directions: AI systems. There are, however, obstacles that must be
a. Advancements in program synthesis and software overcome before its full potential may be tapped. The
optimization The progress in automating program synthe- following are some of the major obstacles facing neuro-
sis, exemplified by neuro-symbolic techniques [62], and the symbolic AI:
strides in enhancing software efficiency underscore the Integration of deep learning and symbolic reasoning A
potential of neuro-symbolic AI in transforming software critical challenge lies in the effective amalgamation of
development practices. Future research could aim to extend neural and symbolic components, a task that requires
these methodologies to more complex and diverse pro- innovative architectural designs and learning paradigms.
gramming tasks, further automating and optimizing soft- The question of how to seamlessly integrate these com-
ware development processes. ponents without diluting their respective strengths remains
b. Improving programming education and software open. Works like the Neuro-Symbolic Concept Learner
development Innovations such as the neuro-symbolic pro- (NS-CL) and Logical Tensor Networks (LTNs) offer
gram corrector [250] highlight the potential for AI to sig- promising directions, yet the quest for a universally effi-
nificantly impact programming education by providing cient integration strategy continues. This challenge is
more nuanced error detection and correction. Extending compounded by the need for sophisticated representation
these tools to accommodate a wider range of programming schemes that can encapsulate symbolic structures within
languages and complexities could revolutionize learning the fluidity of neural architectures, ensuring that the
experiences and software development workflows. extracted symbolic knowledge retains its logical integrity
c. Expanding the scope of neuro-symbolic integration and is amenable to rigorous reasoning processes.
The work in inductive logic programming [87], neuro- Need of a spatial-temporal explainable learning and
symbolic programming paradigms [19], and dedicated reasoning framework Developing frameworks that can
neuro-symbolic programming languages [287] demon- interpret and reason about spatial-temporal data with
strates the evolving landscape of neuro-symbolic AI. transparency, as highlighted by the need for explainable

123
12836 Neural Computing and Applications (2024) 36:12809–12844

neuro-symbolic AI in applications like smart city man- The literature emphasizes the importance of trans-
agement and environmental monitoring, is paramount. parency, fairness, and accountability in AI systems to
Innovations such as CIL2 P [36] and NSL [38] showcase address these challenges. For instance, the concept of
strides toward this goal, yet the quest for fully explainable ‘‘algorithmic auditing’’ has been proposed as a means to
and generalizable systems persists. The integration of scrutinize and evaluate the ethical implications of AI
graph neural networks (GNNs) with symbolic reasoning algorithms, including those used in neuro-symbolic sys-
mechanisms offers a pathway to imbue AI systems with an tems. This process involves a thorough examination of the
enhanced understanding of spatial-temporal dynamics, algorithms’ decision-making processes, data sources, and
pertinent to domains such as environmental modeling and outcomes to identify potential biases and ensure that the
autonomous navigation. The endeavor to refine these systems operate within ethical boundaries [293].
frameworks, extending their applicability and accuracy, Moreover, the development of interpretable models is
stands as a crucial frontier in neuro-symbolic AI research. advocated to enhance the transparency of AI systems,
Data quality and bias The quality and representative- making it easier to understand how decisions are made and
ness of training data are crucial across domains. Biases on what basis. This is particularly relevant for neuro-
inherent in the data can lead to skewed AI models, making symbolic AI, where the rationale behind decisions should
the development of comprehensive and unbiased datasets, be accessible and understandable to users, especially in
as well as algorithms capable of identifying and correcting high-stakes domains such as healthcare, criminal justice,
for bias, a universal challenge. and public policy [292].
Human–machine collaboration Enhancing interfaces Addressing the ethical challenges of bias and fairness in
and methodologies to foster effective human-AI collabo- neuro-symbolic AI also involves considering the broader
ration is vital. While frameworks like NSBL [42] and NTN societal impacts of these technologies. The potential for
[56] have made progress, creating systems that intuitively reinforcement of existing social inequalities through biased
integrate human insights and AI capabilities remains a decision-making underscores the need for ethical frame-
broad challenge. works that prioritize inclusivity, equity, and justice.
Representation and handling of abstract knowledge The Engaging with diverse perspectives and disciplines can
ability to represent and reason about abstract knowledge, a provide a more comprehensive understanding of the social
theme recurrent in works from neuro-symbolic cognitive implications of neuro-symbolic AI and guide the devel-
architectures like MicroPsi [58] to logic-enhanced models opment of more ethical and fair AI systems [294, 295].
like LTN [66], is a critical hurdle. Expanding AI’s capacity Finally, the effects of neuro-symbolic AI on the labor
to manage abstract concepts through novel neuro-symbolic market are a source of worry. Ethical concerns regarding
integrations is essential for advancing AI’s cognitive the social effect and the necessity for retraining and edu-
capabilities. cation is raised as technology develops and threatens
Ethical considerations As neuro-symbolic AI continues human jobs in specific sectors. Concerns about the morality
to evolve, it is imperative to address the ethical challenges of developing and deploying neuro-symbolic AI must be
that accompany its development and application. The addressed if the technology is to be utilized for the greater
integration of neural networks with symbolic reasoning good of society. ‘‘Neuro-symbolic AI should ensure
introduces complex ethical dimensions that warrant careful transparency by making decision-making processes
consideration. understandable, uphold accountability through clear
Neuro-symbolic AI systems, by leveraging the strengths delineation of responsibility for decisions, maintain fair-
of both neural networks and symbolic AI, have the ness by actively mitigating biases in data and algorithms,
potential to address complex problems with a high degree protect privacy by safeguarding personal data, and adhere
of interpretability and adaptability. However, the integra- to non-maleficence by preventing harm and ensuring the
tion of these two paradigms introduces complexities in benefits of AI applications outweigh potential risks.’’
identifying and mitigating biases. Neural networks, known As we navigate the future of neuro-symbolic AI, a
for their capacity to learn from vast datasets, may inad- multidisciplinary approach that amalgamates insights from
vertently encode and amplify existing biases within the cognitive science, computer science, and ethics is para-
data, leading to decisions that can perpetuate societal mount. The exploration of novel integration strategies,
inequalities. Symbolic AI, while providing a framework for advanced representation techniques, and ethical frame-
logical reasoning and interpretability, relies on the pre- works will be instrumental in realizing the full potential of
mises and rules defined by humans, which can also be a neuro-symbolic AI across its diverse applications. The
source of bias [291, 292]. journey ahead, while fraught with challenges, holds the
promise of transformative breakthroughs that could

123
Neural Computing and Applications (2024) 36:12809–12844 12837

redefine the paradigms of artificial intelligence in an array 2. Hassan AM, Rajesh A, Asaad M, Nelson JA, Coert JH, Mehrara
of domains. BJ, Butler CE (2023) Artificial intelligence and machine
learning in prediction of surgical complications: current state,
applications, and implications. Am Surg 89(1):25–30
3. Novakovsky G, Dexter N, Libbrecht MW, Wasserman WW,
5 Conclusion Mostafavi S (2023) Obtaining genetics insights from deep
learning via explainable artificial intelligence. Nat Rev Genet
24(2):125–137
As this article has shown, neuro-symbolic AI is gaining 4. Jebamikyous H, Li M, Suhas Y, Kashef R (2023) Leveraging
traction in the area of AI as it seeks to integrate the best machine learning and blockchain in e-commerce and beyond:
features of both symbolic reasoning and connectionist benefits, models, and application. Discov Artif Intell 3(1):3
learning. Throughout this study, we have covered the 5. Rawat W, Wang Z (2017) Deep convolutional neural networks
for image classification: a comprehensive review. Neural
representation, learning, reasoning, and decision-making Comput 29(9):2352–2449
aspects of neuro-symbolic AI. Robotics, question answer- 6. Bond-Taylor S, Leach A, Long Y, Willcocks CG (2021) Deep
ing, healthcare, computer vision, and programming are just generative modelling: a comparative review of vaes, gans,
a few of the areas where neuro-symbolic AI has found normalizing flows, energy-based and autoregressive models.
IEEE Trans Pattern Anal Mach Intell
success. The limits and difficulties of neuro-symbolic AI, 7. Shakarami A, Ghobaei-Arani M, Shahidinejad A (2020) A
including its scalability, explainability, and ethical impli- survey on the computation offloading approaches in mobile edge
cations, have also been examined. There is still a long way computing: a machine learning-based perspective. Comput
to go, but neuro-symbolic AI shows promise for creating Netw 182:107496
8. Li B, Qi P, Liu B, Di S, Liu J, Pei J, Yi J, Zhou B (2023)
AI systems with human-level intelligence and resemblance. Trustworthy ai: From principles to practices. ACM Comput
Surv 55(9):1–46
Acknowledgments This work is supported by the ‘‘ADI 2022’’ pro- 9. Augusto LM (2021) From symbols to knowledge systems: A.
ject funded by the IDEX Paris-Saclay, ANR-11-IDEX-0003-02. Newell and Ha Simon’s contribution to symbolic ai
10. Newell A (1980) Physical symbol systems. Cogn Sci
Author contributions Conceptualisation, B.P.B., A.R.C., T.P.S. and 4(2):135–183
R.T.; methodology, B.P.B., A.R.C. and R.T.; software, B.P.B., A.R.C. 11. Newell A (1982) The knowledge level. Artif Intell 18(1):87–127
and R.T.; validation, B.P.B., A.R.C., T.P.S. and R.T.; formal analysis, 12. Uschold M, Gruninger M (1996) Ontologies: principles, meth-
B.P.B., A.R.C. and R.T.; investigation, B.P.B., A.R.C. and R.T.; ods and applications. knowl Eng Rev 11(2):93–136
resources, B.P.B., A.R.C. and R.T.; data curation, B.P.B., A.R.C. and 13. Reed SK, Pease A (2017) Reasoning from imperfect knowledge.
R.T.; writing—original draft preparation, B.P.B.; writing—review Cogn Syst Res 41:56–72
and editing, B.P.B., T.P.S., A.R.C. and R.T.; visualisation, B.P.B.; 14. Youheng Z (2023) A historical review and philosophical
supervision, A.R.C. and R.T.; project administration, B.P.B., A.R.C., examination of the two paradigms in artificial intelligence
T.P.S. and R.T.; funding acquisition, R.T. All authors have read and research. Eur J Artif Intell Mach Learn 2(2):24–32
agreed to the published version of the manuscript. 15. Wermter S, Sun R An overview of hybrid neural systems.
Subseries of Lecture Notes in Computer Science Edited by JG
Funding This research received no external funding. Carbonell and J. Siekmann, 1
16. Garcez ASd, Broda KB, Gabbay DM Neural-symbolic learning
Availability of data and materials Data sharing is not applicable to systems foundations and applications
this article as no datasets were generated or analyzed during the 17. Hammer B, Hitzler P (2007) Perspectives of neural-symbolic
current study. integration vol 77
18. Sun R, Alexandre F (2013) Connectionist-symbolic integration:
Code availability Not applicable. from unified to hybrid approaches
19. Chaudhuri S, Ellis K, Polozov O, Singh R, Solar-Lezama A,
Declarations Yue Y (2021) Neurosymbolic programming. Found TrendsÒ
Program Lang 7(3):158–243
20. Hitzler P, Eberhart A, Ebrahimi M, Sarker MK, Zhou L (2022)
Conflict of interest The authors declare no conflict of interest
Neuro-symbolic approaches in artificial intelligence. Natl Sci
Ethics approval Not applicable. Rev 9(6):035
21. Velik R (2008) A bionic model for human-like machine
Consent to participate Not applicable. perception
22. Gallagher K (2018) Request confirmation networks: a cortically
Consent for publication The authors consent to the publication of this inspired approach to neuro-symbolic script execution. PhD
work. thesis, Harvard University
23. Martin LJ (2021) Neurosymbolic automated story generation.
PhD thesis, Georgia Institute of Technology
24. Corchado JM, Aiken J (2002) Hybrid artificial intelligence
References methods in oceanographic forecast models. IEEE Trans Syst
Man Cybern Part C (Appl Rev) 32(4):307–313
1. Helm JM, Swiergosz AM, Haeberle HS, Karnuta JM, Schaffer 25. Hatzilygeroudis I, Prentzas J (2004) Neuro-symbolic approaches
JL, Krebs VE, Spitzer AI, Ramkumar PN (2020) Machine for knowledge representation in expert systems. Int J Hybrid
learning and artificial intelligence: definitions, applications, and Intell Syst 1(3–4):111–126
future directions. Curr Rev Musculoskelet Med 13:69–76

123
12838 Neural Computing and Applications (2024) 36:12809–12844

26. Öztürk P, Tidemann A (2014) A review of case-based reasoning 49. Prentzas J, Hatzilygeroudis I (2011) Neurules-a type of neuro-
in cognition-action continuum: a step toward bridging symbolic symbolic rules: an overview. Springer, Berlin, pp 145–165
and non-symbolic artificial intelligence. Knowl Eng Rev 50. Prentzas J, Hatzilygeroudis I (2011) Efficiently merging sym-
29(1):51–77 bolic rules into integrated rules
27. Besold TR, Garcez Ad, Bader S, Bowman H, Domingos P, 51. Hatzilygeroudis I, Prentzas J (2015) Symbolic-neural rule based
Hitzler P, Kühnberger K-U, Lamb LC, Lowd D, Lima PMV et al reasoning and explanation. Expert Syst Appl 42(9):4595–4609
(2017) Neural-symbolic learning and reasoning: a survey and 52. Prentzas J, Hatzilygeroudis I (2016) Assessment of life insur-
interpretation. arXiv preprint arXiv:1711.03902 ance applications: an approach integrating neuro-symbolic rule-
28. Garnelo M, Shanahan M (2019) Reconciling deep learning with based with case-based reasoning. Expert Syst 33(2):145–160
symbolic artificial intelligence: representing objects and rela- 53. Sreelekha S (2018) Neurosymbolic integration with uncertainty.
tions. Curr Opin Behav Sci 29:17–23 Ann Math Artif Intell 84(3–4):201–220
29. Garcez Ad, Gori M, Lamb LC, Serafini L, Spranger M, Tran SN 54. Prentzas J, Hatzilygeroudis I (2018) Using clustering algorithms
(2019) Neural-symbolic computing: an effective methodology to improve the production of symbolic-neural rule bases from
for principled integration of machine learning and reasoning. empirical data. Int J Artif Intell Tools 27(02):1850002
arXiv preprint arXiv:1905.06088 55. Borges RV, Garcez Ad, Lamb LC (2011) Learning and repre-
30. De Raedt L, Dumančić S, Manhaeve R, Marra G (2020) From senting temporal knowledge in recurrent networks. IEEE Trans
statistical relational to neuro-symbolic artificial intelligence. Neural Netw 22(12):2409–2421
arXiv preprint arXiv:2003.08316 56. Socher R, Chen D, Manning CD, Ng A (2013) Reasoning with
31. Sarker MK, Zhou L, Eberhart A, Hitzler P (2021) Neuro-sym- neural tensor networks for knowledge base completion. In:
bolic artificial intelligence. AI Commun 34(3):197–209 Advances in neural information processing systems, vol 26
32. Wang W, Yang Y (2022) Towards data-and knowledge-driven 57. Riveret R, Pitt JV, Korkinof D, Draief M (2015) Neuro-sym-
artificial intelligence: a survey on neuro-symbolic computing. bolic agents: Boltzmann machines and probabilistic abstract
arXiv preprint arXiv:2210.15889 argumentation with sub-arguments. In: AAMAS, pp 1481–1489
33. Garcez Ad, Lamb LC (2023) Neurosymbolic ai: the 3rd wave. 58. Bach J (2015) Modeling motivation in micropsi 2. In: Artificial
Artif Intell Rev 56:1–20 general intelligence: 8th international conference, AGI 2015,
34. Towell GG, Shavlik JW (1994) Knowledge-based artificial AGI 2015, Berlin, Germany, July 22-25, 2015, Proceedings 8.
neural networks. Artif intell 70(1–2):119–165 Springer, pp 3–13
35. Pinkas G (1995) Reasoning, nonmonotonicity and learning in 59. Bach J (2009) Principles of synthetic intelligence psi: an
connectionist networks that capture propositional knowledge. architecture of motivated cognition, vol 4
Artif Intell 77(2):203–247 60. Varadarajan KM, Vincze M (2015) Affordance and k-tr aug-
36. Avila Garcez AS, Zaverucha G (1999) The connectionist mented alphabet based neuro-symbolic language-af-ktraans-a
inductive learning and logic programming system. Appl Intell human-robot interaction meta-language. In: 2015 20th interna-
11:59–77 tional conference on methods and models in automation and
37. França MV, Zaverucha G, Garcez AS (2014) Fast relational robotics (MMAR). IEEE, pp 394–399
learning using bottom clause propositionalization with artificial 61. Abubakar H, Masanawa SA, Yusuf S (2020) Neuro-symbolic
neural networks. Mach Learn 94:81–104 integration of hopfield neural network for optimal maximum
38. Burattini E, De Gregorio M, Francesco A (2002) Nsl: a neuro- random ksatisfiability (maxrksat) representation. J Reliab Stat
symbolic language for monotonic and non-monotonic logical Stud 13:199–220
inferences. In: SBRN, pp 256–261 62. Parisotto E, Mohamed A-r, Singh R, Li L, Zhou D, Kohli P
39. Garcez A, Lamb L (2003) Reasoning about time and knowledge (2016) Neuro-symbolic program synthesis. arXiv preprint arXiv:
in neural symbolic learning systems. In: Advances in neural 1611.01855
information processing systems, vol 16 63. Tran SN, Garcez ASd (2016) Deep logic networks: Inserting and
40. Garcez ASd, Lamb LC (2006) A connectionist computational extracting knowledge from deep belief networks. IEEE Trans
model for epistemic and temporal reasoning. Neural Comput Neural Netw Learn Syst 29(2):246–258
18(7):1711–1738 64. Hu Z, Ma X, Liu Z, Hovy E, Xing E (2016) Harnessing deep
41. Lima PMV, Morveli-Espinoza MM, Pereira GC, Franga F neural networks with logic rules. arXiv preprint arXiv:1603.
(2005) Satyrus: a sat-based neuro-symbolic architecture for 06318
constraint processing. In: Fifth international conference on 65. Rocktäschel T, Riedel S (2016) Learning knowledge base
hybrid intelligent systems (HIS’05). IEEE, p 6 inference with neural theorem provers. In: Proceedings of the
42. Burattini E, Datteri E, Tamburrini G (2005) Neuro-symbolic 5th workshop on automated knowledge base construction,
programs for robots. In: Proceedings of NeSy, vol 5 pp 45–50
43. Burattini E, De Gregorio M, Rossi S (2010) An adaptive 66. Serafini L, Garcez AS (2016) Learning and reasoning with logic
oscillatory neural architecture for controlling behavior based tensor networks. In: AI* IA 2016 advances in artificial intelli-
robotic systems. Neurocomputing 73(16–18):2829–2836 gence: XVth international conference of the Italian association
44. Sathasivam S, Velavan M (2010) Neuro symbolic integration for artificial intelligence, Genova, Italy, November 29–Decem-
using pseudo inverse rule. In: Annual international conference ber 1, 2016, Proceedings XV. Springer, pp 334–348
on advance topics in artificial intelligence, Phuket, Thailand 67. Manigrasso F, Miro FD, Morra L, Lamberti F (2021) Faster-ltn:
45. Sathasivam S (2011) Learning rules comparison in neuro-sym- a neuro-symbolic, end-to-end object detection architecture. In:
bolicintegration. Int J Appl Phys Math 1(2):129 Artificial neural networks and machine learning–ICANN 2021:
46. Sathasivam S (2012) Applying different learning rules in neuro- 30th international conference on artificial neural networks,
symbolic integration. In: Advanced materials research, vol 433. Bratislava, Slovakia, September 14–17, 2021, Proceedings, Part
Trans Tech Publ, pp 716–720 II 30. Springer, pp 40–52
47. Velik R (2010) The neuro-symbolic code of perception. J Cogn 68. Badreddine S, Garcez Ad, Serafini L, Spranger M (2022) Logic
Sci 11(2):161–180 tensor networks. Artif Intell 303:103649
48. Komendantskaya E, Broda K, Garcez A (2010) Using inductive 69. Wang G (2017) Dgcc: data-driven granular cognitive comput-
types for ensuring correctness of neuro-symbolic computations ing. Granular Comput 2(4):343–355

123
Neural Computing and Applications (2024) 36:12809–12844 12839

70. Tran SN (2017) Propositional knowledge representation and reasoning. In: Proceedings of the AAAI conference on artificial
reasoning in restricted boltzmann machines. arXiv preprint intelligence, vol 35, pp 4902–4911
arXiv:1705.10899 90. Shindo H, Dhami DS, Kersting K (2021) Neuro-symbolic for-
71. Cohen WW, Yang F, Mazaitis KR (2017) Tensorlog: Deep ward reasoning. arXiv preprint arXiv:2110.09383
learning meets probabilistic dbs. arXiv preprint arXiv:1707. 91. Škrlj B, Martinc M, Lavrač N, Pollak S (2021) autobot: evolving
05390 neuro-symbolic representations for explainable low resource
72. Palangi H, Smolensky P, He X, Deng L (2018) Question-an- text classification. Mach Learn 110:989–1028
swering with grammatically-interpretable representations. In: 92. Duan X, Wang X, Zhao P, Shen G, Zhu W (2022) Deeplogic:
Proceedings of the AAAI conference on artificial intelligence, Joint learning of neural perception and logical reasoning. IEEE
vol 32 Trans Pattern Anal Mach Intell
73. Evans R, Grefenstette E (2018) Learning explanatory rules from 93. Glanois C, Jiang Z, Feng X, Weng P, Zimmer M, Li D, Liu W,
noisy data. J Artif Intell Res 61:1–64 Hao J (2022) Neuro-symbolic hierarchical rule induction. In:
74. Minervini P, Bošnjak M, Rocktäschel T, Riedel S, Grefenstette International conference on machine learning, PMLR,
E (2020) Differentiable reasoning on large knowledge bases and pp 7583–7615
natural language. In: Proceedings of the AAAI conference on 94. Cambria E, Liu Q, Decherchi S, Xing F, Kwok K (2022) Sen-
artificial intelligence, vol 34, pp 5182–5190 ticnet 7: A commonsense-based neurosymbolic ai framework for
75. Manhaeve R, Dumancic S, Kimmig A, Demeester T, De Raedt, explainable sentiment analysis. In: Proceedings of the thirteenth
L (2018) Deepproblog: neural probabilistic logic programming. language resources and evaluation conference, pp 3829–3839
In: Advances in neural information processing systems, vol 31 95. Han Z, Cai L-W, Dai W-Z, Huang Y-X, Wei B, Wang W, Yin Y
76. De Raedt L, Manhaeve R, Dumancic S, Demeester T, Kimmig (2023) Abductive subconcept learning. Sci China Inf Sci
A (2019) Neuro-symbolic= neural? logical? probabilistic. In: 66(2):1–13
NeSy’19@ IJCAI, the 14th international workshop on neural- 96. Wermter S, Sun R (2001) The present and the future of hybrid
symbolic learning and reasoning neural symbolic systems some reflections from the nips work-
77. Manhaeve R, De Raedt L, Kimmig A, Dumancic S, Demeester shop. AI Mag 22(1):123–123
T (2019) Deepproblog: integrating logic and learning through 97. Kelley TD (2003) Symbolic and sub-symbolic representations in
algebraic model counting. In: KR2ML Workshop@ Neurips’19, computational models of human cognition: what can be learned
Vancouver, Canada from biology? Theory Psychol 13(6):847–860
78. Dong H, Mao J, Lin T, Wang C, Li L, Zhou D (2019) Neural 98. Rapaport WJ (2003) How to pass a turing test: Syntactic
logic machines. arXiv preprint arXiv:1904.11694 semantics, natural-language understanding, and first-person
79. Young H, Bastani O, Naik M (2019) Learning neurosymbolic cognition. The Turing test: the elusive standard of artificial
generative models via program synthesis. In: International intelligence, 161–184
conference on machine learning. PMLR, pp 7144–7153 99. Bader S, Hitzler P, Hölldobler S (2004) The integration of
80. Daniele A, Serafini L (2019) Knowledge enhanced neural net- connectionism and first-order knowledge representation and
works. In: PRICAI 2019: trends in artificial intelligence: 16th reasoning as a challenge for artificial intelligence. arXiv preprint
Pacific Rim international conference on artificial intelligence, cs/0408069
Cuvu, Yanuca Island, Fiji, August 26–30, 2019, Proceedings, 100. Pugeda TGS III (2005) Artificial intelligence and ethical
Part I 16. Springer, pp 542–554 reflections from the catholic church. Intelligence 26(4):53
81. Bosselut A, Rashkin H, Sap M, Malaviya C, Celikyilmaz A, 101. Ray O, Garcez AS (2006) Towards the integration of abduction
Choi Y (2019) Comet: Commonsense transformers for auto- and induction in artificial neural networks. In: Proceedings of
matic knowledge graph construction. arXiv preprint arXiv:1906. the ECAI, vol 6. Citeseer, pp 41–46
05317 102. Rawbone P, Paor P, Ware JA, Barrett J (2006) Interactive
82. Bosselut A, Le Bras R, Choi Y (2021) Dynamic neuro-symbolic causation: a neurosymbolic agent. In: IC-AI. Citeseer, pp 51–55
knowledge graph construction for zero-shot commonsense 103. Velik R, Bruckner D (2008) euro-symbolic networks: intro-
question answering. In: Proceedings of the AAAI conference on duction to a new information processing principle. In: 2008 6th
artificial intelligence, vol 35, pp 4923–4931 IEEE international conference on industrial informatics. IEEE,
83. Dang-Nhu R (2020) Plans: Neuro-symbolic program learning pp 1042–1047
from videos. Adv Neural Inf Process Syst 33:22445–22455 104. Kühnberger K-U, Gust H, Geibel P (2008) erspectives of neuro–
84. Amizadeh S, Palangi H, Polozov A, Huang Y, Koishida K symbolic integration–extended abstract–. In: Dagstuhl Seminar
(2020) Neuro-symbolic visual reasoning: Disentangling. In: Proceedings. Schloss Dagstuhl-Leibniz-Zentrum für Informatik
International conference on machine learning. PMLR, 105. Kühnberger K-U, Geibel P, Gust H, Krumnack U, Ovchinnikova
pp 279–290 E, Schwering A, Wandmacher T (2008) Learning from incon-
85. Hewitt L, Le TA, Tenenbaum J (2020) Learning to learn gen- sistencies in an integrated cognitive architecture. Front Artif
erative programs with memoised wake-sleep. In: Conference on Intell Appl 171:212
uncertainty in artificial intelligence. PMLR, pp 1278–1287 106. Haikonen PO (2009) The role of associative processing in
86. Riegel R, Gray A, Luus F, Khan N, Makondo N, Akhalwaya IY, cognitive computing. Cogn Comput 1:42–49
Qian H, Fagin R, Barahona F, Sharma U, et al (2020) Logical 107. Prentzas J, Hatzilygeroudis I (2009) Combinations of case-based
neural networks. arXiv preprint arXiv:2006.13155 reasoning with other intelligent methods. Int J Hybrid Intell Syst
87. Sen P, Carvalho BW, Riegel R, Gray A (2022) Neuro-symbolic 6(4):189–209
inductive logic programming with logical neural networks. In: 108. Garcez AS (2010) eurons and symbols: a manifesto. In: Dag-
Proceedings of the AAAI conference on artificial intelligence, stuhl Seminar Proceedings. Schloss Dagstuhl-Leibniz-Zentrum
vol. 36, pp 8212–8219 fÃ1=4r Informatik
88. Zimmer M, Feng X, Glanois C, Jiang Z, Zhang J, Weng P, Dong 109. Velik R (2010) Why machines cannot feel. Mind Mach
L, Jianye H, Wulong L (2021) Differentiable logic machines. 20(1):1–18
arXiv preprint arXiv:2102.11529 110. Bruckner D, Velik R, Penya Y (2011) Machine perception in
89. Arabshahi F, Lee J, Gawarecki M, Mazaitis K, Azaria A, automation: a call to arms. EURASIP J Embed Syst 2011:1–9
Mitchell T (2021) Conversational neuro-symbolic commonsense

123
12840 Neural Computing and Applications (2024) 36:12809–12844

111. POli R (2012) Discovery of symbolic, neuro-symbolic and 132. Alonso RS (2021) Deep symbolic learning and semantics for an
neural networks with parallel. In: Artificial neural nets and explainable and ethical artificial intelligence. In: Ambient
genetic algorithms: proceedings of the international conference intelligence–software and applications: 11th international sym-
in Norwich, UK, 1997. Springer, p 419 posium on ambient intelligence. Springer, pp 272–278
112. Velik R (2013) Brain-like artificial intelligence for automation– 133. Park K-W, Bu S-J, Cho S-B (2021) Evolutionary optimization of
foundations, concepts and implementation examples. BRAIN neuro-symbolic integration for phishing url detection. In: Hybrid
4(1–4):26–54 artificial intelligent systems: 16th international conference,
113. Achler T (2013) Neural networks that perform recognition using HAIS 2021, Bilbao, Spain, September 22–24, 2021, Proceedings
generative error may help fill the ‘‘neuro-symbolic gap’’. Biol 16. Springer, pp 88–100
Inspired Cogn Archit 3:6–12 134. Oltramari A, Francis J, Ilievski F, Ma K, Mirzaee R (2021)
114. Lima PM (2017) Q-satyrus: Mapping neuro-symbolic reasoning Generalizable neuro-symbolic systems for commonsense ques-
into an adiabatic quantum computer. In: NeSy tion answering, 294–310
115. Shen S, Ramesh S, Shinde S, Roychoudhury A, Saxena P (2018) 135. Calvaresi D, Ciatto G, Najjar A, Aydoğan R, Torre L, Omicini
Neuro-symbolic execution: The feasibility of an inductive A, Schumacher M (2021) Expectation: personalized explainable
approach to symbolic execution. arXiv preprint arXiv:1807. artificial intelligence for decentralized agents with heteroge-
00575 neous knowledge. In: Explainable and transparent AI and multi-
116. Lieto A, Lebiere C, Oltramari A (2018) The knowledge level in agent systems: third international workshop, EXTRAAMAS
cognitive architectures: current limitations and possible devel- 2021, Virtual Event, May 3–7, 2021, Revised Selected Papers 3.
opments. Cogn Syst Res 48:39–55 Springer, pp 331–343
117. Wang P (2004) Toward a unified artificial intelligence. In: 136. Nye M, Tessler M, Tenenbaum J, Lake BM (2021) Improving
AAAI Technical Report (1), p 83 coherence and consistency in neural sequence models with dual-
118. Hammer P (2019) Adaptive neuro-symbolic network agent. system, neuro-symbolic reasoning. Adv Neural Inf Process Syst
Springer, Berlin, pp 80–90 34:25192–25204
119. Sittón I, Alonso RS, Hernández-Nieves E, Rodrı́guez-Gonzalez 137. Gaur M, Gunaratna K, Bhatt S, Sheth A (2022) Knowledge-
S, Rivas A (2019) Neuro-symbolic hybrid systems for industry infused learning: a sweet spot in neuro-symbolic ai. IEEE
4.0: a systematic mapping study. In: Knowledge management in Internet Comput 26(4):5–11
organizations: 14th international conference, KMO 2019, 138. Samsonovich AV (2022) One possibility of a neuro-symbolic
Zamora, Spain, July 15–18, 2019, Proceedings 14. Springer, integration. In: Biologically inspired cognitive architectures
pp 455–465 2021: proceedings of the 12th annual meeting of the BICA
120. Marcus G (2020) The next decade in ai: four steps towards Society. Springer, pp 428–437
robust artificial intelligence. arXiv preprint arXiv:2002.06177 139. Dold D, Soler Garrido J, Caceres Chian V, Hildebrandt M,
121. Hameed HA (2020) Artificial intelligence: What it was, and Runkler T (2022) Neuro-symbolic computing with spiking
what it should be? Int J Adv Comput Sci Appl 11(6) neural networks. In: Proceedings of the international conference
122. Belle V (2020) Symbolic logic meets machine learning: a brief on neuromorphic systems 2022, pp 1–4
survey in infinite domains. In: Scalable uncertainty manage- 140. Chitnis R, Silver T, Tenenbaum JB, Lozano-Perez T, Kaelbling
ment: 14th international conference, SUM 2020, Bozen-Bol- LP (2022) Learning neuro-symbolic relational transition models
zano, Italy, September 23–25, 2020, Proceedings 14. Springer, for bilevel planning. In: 2022 IEEE/RSJ international confer-
pp 3–16 ence on intelligent robots and systems (IROS). IEEE,
123. Tiddi I (2020) Directions for explainable knowledge-enabled pp 4166–4173
systems. Knowledge Graphs for eXplainable Artificial intelli- 141. Kocoń J, Baran J, Gruza M, Janz A, Kajstura M, Kazienko P,
gence: Foundations Applications and Challenges 47:245 Korczyński W, Miłkowski P, Piasecki M, Szołomicka J (2022)
124. Hanson D, Imran A, Vellanki A, Kanagaraj S (2020) A neuro- Neuro-symbolic models for sentiment analysis. In: Computa-
symbolic humanlike arm controller for sophia the robot. arXiv tional science–ICCS 2022: 22nd international conference, Lon-
preprint arXiv:2010.13983 don, UK, June 21–23, 2022, Proceedings, Part II. Springer,
125. Franklin NT, Norman KA, Ranganath C, Zacks JM, Gershman pp 667–681
SJ (2020) Structured event memory: a neuro-symbolic model of 142. Alon U, Xu F, He J, Sengupta S, Roth D, Neubig G (2022)
event cognition. Psychol Rev 127(3):327 Neuro-symbolic language modeling with automaton-augmented
126. Di Maio P (2020) Neurosymbolic knowledge representation for retrieval. In: International conference on machine learning.
explainable and trustworthy ai PMLR, pp 468–485
127. Anderson G, Verma A, Dillig I, Chaudhuri S (2020) Neu- 143. Amado LR, Pereira RF, Meneguzzi FR (2023) Robust neuro-
rosymbolic reinforcement learning with formally verified symbolic goal and plan recognition. In: Proceedings of the 37th
exploration. Adv Neural Inf Process Syst 33:6172–6183 AAAI conference on artificial intelligence (AAAI), 2023,
128. Gaur M, Kursuncu U, Sheth A, Wickramarachchi R, Yadav S Estados Unidos
(2020) Knowledge-infused deep learning. In: Proceedings of the 144. Hitzler P, Roth-Berghofer T, Rudolph S (2007) Foundations of
31st ACM conference on hypertext and social media, artificial intelligence faint-07 workshop at ki 2007. In: Work-
pp 309–310 shop at KI, vol 2007. Citeseer
129. Santoro A, Lampinen A, Mathewson K, Lillicrap T, Raposo D 145. Garcez AS, Lamb LC, Gabbay DM (2008) Neural-symbolic
(2021) Symbolic behaviour in artificial intelligence. arXiv pre- cognitive reasoning
print arXiv:2102.03406 146. Komendantskaya E, Broda K, Garcez ASd (2010) Neuro-sym-
130. Ebrahimi M, Eberhart A, Bianchi F, Hitzler P (2021) Towards bolic representation of logic programs defining infinite sets.
bridging the neuro-symbolic gap: deep deductive reasoners. ICANN (1) 6352:301–304
Appl Intell 51:6326–6348 147. Andreasik J, Ciebiera A, Umpirowicz S, Speretta M, Gauch S,
131. Susskind Z, Arden B, John LK, Stockton P, John EB (2021) Lakkaraju P, Alessandrelli D, Pagano P, Nastasi C, Petracca M
Neuro-symbolic ai: An emerging class of ai workloads and their et al (2010) Hsi 2010 conference programme may 13
characterization. arXiv preprint arXiv:2109.06133 148. Barcelona CS, Garcez Ad, Lamb L Seventh international
workshop on neural-symbolic learning and reasoning

123
Neural Computing and Applications (2024) 36:12809–12844 12841

149. Hatzilygeroudis I, Prentzas J (2011) Combinations of intelligent 169. Kautz H (2022) The third ai summer: Aaai Robert S. Engelmore
methods and applications. Springer, Berlin memorial lecture. AI Mag 43(1):105–125
150. Achler T (2012) Towards bridging the gap between pattern 170. Browne A, Sun R (2001) Connectionist inference models.
recognition and symbolic representation within neural networks. Neural Netw 14(10):1331–1355
In: Workshop on neural-symbolic learning and reasoning, 171. Cloete I, Zurada JM (2000) Knowledge-based neurocomputing
AAAI-2012. Citeseer 172. Hamilton K, Nayak A, Božić B, Longo L (2022) Is neuro-
151. Garcez A, Gori M, Hitzler P, Lamb LC (2015) Neural-symbolic symbolic ai meeting its promises in natural language process-
learning and reasoning (dagstuhl seminar 14381). In: Dagstuhl ing? a structured review. Semantic Web (Preprint), 1–42
Reports, vol. 4. Schloss Dagstuhl-Leibniz-Zentrum fuer 173. Yu D, Yang B, Liu D, Wang H, Pan S (2023) A survey on
Informatik neural-symbolic learning systems. Neural Netw
152. Hatzilygeroudis I, Palade V (2016) 6thinternational workshop 174. Yang C, Chaudhuri S (2022) Safe neurosymbolic learning with
on combinations of intelligent methods and applications (cima differentiable symbolic execution. arXiv preprint arXiv:2203.
2016) 07671
153. Hatzilygeroudis I, Palade V, Prentzas J (2017) Advances in 175. Shah A, Zhan E, Sun J, Verma A, Yue Y, Chaudhuri S (2020)
combining intelligent methods Learning differentiable programs with admissible neural
154. Hatzilygeroudis I, Palade V (2018) Advances in hybridization of heuristics. Adv Neural Inf Process Syst 33:4940–4952
intelligent methods 176. Barbin A, Cerutti F, Gerevini AE (2022) Addressing the symbol
155. Hammer P, Agrawal P, Goertzel B, Iklé M (2019) Artificial grounding problem with constraints in neuro-symbolic planning
general intelligence: 12th international conference, AGI 2019, 177. Zellers R, Holtzman A, Peters M, Mottaghi R, Kembhavi A,
Shenzhen, China, August 6–9, 2019, Proceedings, vol 11654. Farhadi A, Choi Y (2021) Piglet: language grounding through
Springer neuro-symbolic interaction in a 3d world. arXiv preprint arXiv:
156. Shen S, Shinde S, Ramesh S, Roychoudhury A, Saxena P (2019) 2106.00188
Neuro-symbolic execution: Augmenting symbolic execution 178. Borghesani V, Piazza M (2017) The neuro-cognitive represen-
with neural constraints. In: NDSS tations of symbols: the case of concrete words. Neuropsy-
157. Averkin A (2019) Hybrid intelligent systems based on fuzzy chologia 105:4–17
logic and deep learning. Artificial Intelligence: 5th RAAI 179. Mao J, Gan C, Kohli P, Tenenbaum JB, Wu J (2019) The neuro-
Summer School, Dolgoprudny, Russia, July 4–7, 2019, Tutorial symbolic concept learner: Interpreting scenes, words, and sen-
Lectures, 3–12 tences from natural supervision. arXiv preprint arXiv:1904.
158. Pisano G, Ciatto G, Calegari R, Omicini A (2020) Neuro-sym- 12584
bolic computation for xai: Towards a unified model. In: WOA, 180. Cunnington D, Law M, Lobo J, Russo A (2024) The role of
vol 1613, p 101 foundation models in neuro-symbolic learning and reasoning.
159. Alam M, Groth P, Hitzler P, Paulheim H, Sack H, Tresp V arXiv preprint arXiv:2402.01889
(2020) Cssa’20: workshop on combining symbolic and sub- 181. De Mántaras RL (1991) A distance-based attribute selection
symbolic methods and their applications. In: Proceedings of the measure for decision tree induction. Mach Learn 6:81–92
29th ACM international conference on information & knowl- 182. Valiant LG (1984) Deductive learning. Philos Trans R Soc Lond
edge management, pp 3523–3524 Ser A Math Phys Sci 312(1522):441–446
160. Benzmüller C, Lomfeld B (2020) Reasonable machines: a 183. Tiddi I, Schlobach S (2022) Knowledge graphs as tools for
research manifesto. In: KI 2020: advances in artificial intelli- explainable machine learning: a survey. Artif Intell 302:103627
gence: 43rd German conference on AI, Bamberg, Germany, 184. Arulkumaran K, Deisenroth MP, Brundage M, Bharath AA
September 21–25, 2020, Proceedings 43. Springer, pp 251–258 (2017) Deep reinforcement learning: a brief survey. IEEE Signal
161. Ilkou E, Koutraki M (2020) Symbolic vs sub-symbolic ai Process Mag 34(6):26–38
methods: Friends or enemies? In: CIKM (Workshops) 185. Sutton RS, Barto AG (2018) Reinforcement learning: an
162. Singh G, Mondal S, Bhatia S, Mutharaju R (2021) Neuro- introduction
symbolic techniques for description logic reasoning (student 186. Sætre AS, Ven A (2021) Generating theory by abduction. Acad
abstract). In: Proceedings of the AAAI conference on artificial Manag Rev 46(4):684–701
intelligence, vol 35, pp 15891–15892 187. Al-Ajlan A (2015) The comparison between forward and
163. Branco R, Branco A, Silva JM, Rodrigues J (2021) Common- backward chaining. Int J Mach Learn Comput 5(2):106
sense reasoning: how do neuro-symbolic and neuro-only 188. Weber L, Minervini P, Münchmeyer J, Leser U, Rocktäschel T
approaches compare? In: CIKM Workshops (2019) Nlprolog: reasoning with weak unification for question
164. Basu K, Murugesan K, Atzeni M, Kapanipathi P, Talamadupula answering in natural language. arXiv preprint arXiv:1906.06187
K, Klinger T, Campbell M, Sachan M, Gupta G (2021) A hybrid 189. Zhang B, Zhu J, Su H (2023) Toward the third generation
neuro-symbolic approach for text-based games using inductive artificial intelligence. Sci China Inf Sci 66(2):1–19
logic programming. Combining learning and reasoning: pro- 190. SKahneman D (2013) Thinking, fast and slow
gramming languages, formalisms, and representations 191. Kapanipathi P, Abdelaziz I, Ravishankar S, Roukos S, Gray A,
165. Garcez Ad, Jiménez-Ruiz E (2021) Neural-symbolic learning Astudillo R, Chang M, Cornelio C, Dana S, Fokoue A, et al
and reasoning (nesy) (2020) Leveraging abstract meaning representation for knowl-
166. Saha A, Joty S, Hoi SC (2022) Weakly supervised neuro-sym- edge base question answering. arXiv preprint arXiv:2012.01707
bolic module networks for numerical reasoning over text. In: 192. Huang J, Li Z, Chen B, Samel K, Naik M, Song L, Si X (2021)
Proceedings of the AAAI conference on artificial intelligence, Scallop: From probabilistic deductive databases to scalable
vol 36, pp 11238–11247 differentiable reasoning. Adv Neural Inf Process Syst
167. Ahmed K, Teso S, Chang K-W, Broeck G, Vergari A (2022) 34:25134–25145
Semantic probabilistic layers for neuro-symbolic learning. Adv 193. Smullyan RM (1995) First-order logic
Neural Inf Process Syst 35:29944–29959 194. Andrews PB (2013) An introduction to mathematical logic and
168. Bader S, Hitzler P (2005) Dimensions of neural-symbolic inte- type theory: to truth through proof, vol 27
gration—a structured survey. arXiv preprint arXiv:cs/0511042 195. Garcez Ad, Bader S, Bowman H, Lamb LC, Penning L, Illu-
minoo B, Poon H, Zaverucha CG (2022) Neural-symbolic

123
12842 Neural Computing and Applications (2024) 36:12809–12844

learning and reasoning: a survey and interpretation. Neuro- 215. Fdez-Riverola F, Corchado JM (2003) Fsfrt: Forecasting system
Symb Artif Intell State Art 342(1):327 for red tides: a hybrid autonomous ai model. Appl Artif Intell
196. Ehrlinger L, Wöß W (2016) Towards a definition of knowledge 17(10):955–982
graphs. SEMANTiCS (Posters, Demos, SuCCESS) 48(1–4):2 216. Policastro CA, Carvalho AC, Delbem AC (2003) Hybrid
197. Ji S, Pan S, Cambria E, Marttinen P, Philip SY (2021) A survey approaches for case retrieval and adaptation. In: KI 2003:
on knowledge graphs: representation, acquisition, and applica- Advances in Artificial Intelligence: 26th Annual German Con-
tions. IEEE Trans Neural Netw Learn Syst 33(2):494–514 ference on AI, KI 2003, Hamburg, Germany, September 15-18,
198. Sun R (2002) Hybrid systems and connectionist implementa- 2003. Proceedings 26. Springer, pp 297–311
tionalism. Encyclop Cogn Sci 1:697–703 217. Fernández-Riverola F, Corchado JM (2004) Employing tsk
199. Mikolov T, Chen K, Corrado G, Dean J (2013) Efficient esti- fuzzy models to automate the revision stage of a cbr system. In:
mation of word representations in vector space. arXiv preprint Current topics in artificial intelligence: 10th conference of the
arXiv:1301.3781 Spanish association for artificial intelligence, CAEPIA 2003,
200. Pennington J, Socher R, Manning CD (2014) Glove: Global and 5th Conference on Technology Transfer, TTIA 2003, San
vectors for word representation. In: Proceedings of the 2014 Sebastian, Spain, November 12-14, 2003. Revised Selected
conference on empirical methods in natural language processing Papers. Springer, pp 302–311
(EMNLP), pp 1532–1543 218. Corchado JM, Borrajo ML, Pellicer MA, Yáñez JC (2005)
201. Burattini E, De Gregorio M, Tamburrin G (1999) Pictorial and Neuro-symbolic system for business internal control. In:
verbal components in artificial intelligence explanations. In: Advances in data mining: applications in image mining, medi-
Vision: the approach of biophysics and neurosciences: pro- cine and biotechnology, management and environmental con-
ceedings of the international school of biophysics, Casamicciola, trol, and telecommunications; 4th industrial conference on data
Napoli, Italy, 11-16 October 1999, vol 11, p 471 mining, ICDM 2004, Leipzig, Germany, July 4-7, 2004, Revised
202. Hitzler P, Seda AK (2003) Continuity of semantic operators in Selected Papers 4. Springer, pp 1–10
logic programming and their approximation by artificial neural 219. Prentzas J, Hatzilygeroudis I, Michail O (2008) Improving the
networks. In: KI 2003: advances in artificial intelligence: 26th accuracy of neuro-symbolic rules with case-based reasoning. In:
annual German conference on AI, KI 2003, Hamburg, Germany, Proceedings of the first international workshop on combinations
September 15-18, 2003. Proceedings 26. Springer, pp 355–369 of intelligent methods and applications in conjunction with 18th
203. Coraggio P, De Gregorio M, Forastiere M (2008) Robot navi- European conference on artificial intelligence, pp 49–54
gation based on neurosymbolic reasoning over landmarks. Int J 220. Newman CBD (1998) Uci repository of machine learning
Pattern Recognit Artif Intell 22(05):1001–1014 databases. http://www.ics.uci.edu/mlearn/MLRepository.html
204. Staffa M, Rossi S, De Gregorio M, Burattini E (2011) Thresh- 221. Borrajo ML, Laza R, Corchado JM (2008) A complex case-
olds tuning of a neuro-symbolic net controlling a behavior-based based advisor. Appl Artif Intell 22(5):377–406
robotic system. In: ESANN 222. Prentzas J, Hatzilygeroudis I (2011) Case-based reasoning
205. Price KV (2013) Differential evolution. Handbook of Opti- integrations: Approaches and applications. Case-based reason-
mization: From Classical to Modern Approach, 187–214 ing: processes, suitability and applications, 1–28
206. Hasoon SO, Jasim YA (2013) Diagnosis windows problems 223. Hatzilygeroudis I, Prentzas J (2013) Fuzzy and neuro-symbolic
based on hybrid intelligence systems. J Eng Sci Technol approaches in personal credit scoring: assessment of bank loan
8(5):566–578 applicants. In: Innovations in Intelligent Machines-4, p 319
207. Golovko V, Kroshchanka A, Kovalev M, Taberko V, Ivaniuk D 224. Bach J, Herger P (2015) Request confirmation networks for
(2020) Neuro-symbolic artificial intelligence: application for neuro-symbolic script execution. In: CoCo@ NIPS
control the quality of product labeling. In: Open semantic 225. Bologna G, Hayashi Y (2017) Characterization of symbolic
technologies for intelligent system: 10th international confer- rules embedded in deep dimlp networks: a challenge to trans-
ence, OSTIS 2020, Minsk, Belarus, February 19–22, 2020, parency of deep learning. J Artif Intell Soft Comput Res
Revised Selected Papers. Springer, pp 81–101 7(4):265–286
208. Wang F-Y, Zhang JJ, Zheng X, Wang X, Yuan Y, Dai X, Zhang 226. Kraetzschmar G, Sablatnög S, Enderle S, Palm G (2000)
J, Yang L (2016) Where does alphago go: from church-turing Application of neurosymbolic integration for environment
thesis to alphago thesis and beyond. IEEE/CAA J Autom Sin modelling in mobile robots. In: Hybrid neural systems. Springer,
3(2):113–120 pp 387–401
209. Świechowski M, Godlewski K, Sawicki B, Mańdziuk J (2023) 227. Burattini E, Coraggio P, De Gregorio M, Ripa B (2003) Agent
Monte Carlo tree search: a review of recent modifications and wisard: go and catch that image. In: Proc. First IAPR TC3
applications. Artif Intell Rev 56(3):2497–2562 Workshop, Florence, Italy, vol 89, p 95
210. Ultsch A (2000) The neuro-data-mine. In: Symposia on neural 228. Grieco BP, Lima PM, De Gregorio M, França FM (2010) Pro-
computation (NC’2000), Berlin, Germany ducing pattern examples from ‘‘mental’’ images. Neurocom-
211. Corchado JM, Lees B (2001) Adaptation of cases for case based puting 73(7–9):1057–1064
forecasting with neural network support. In: Soft computing in 229. Coraggio P, De Gregorio M (2007) A neurosymbolic hybrid
case based reasoning, pp 293–319 approach for landmark recognition and robot localization. In:
212. Fdez-Riverola F, Corchado JM, Torres JM (2002) Neuro-sym- Advances in brain, vision, and artificial intelligence: second
bolic system for forecasting red tides. In: Artificial intelligence international symposium, BVAI 2007, Naples, Italy, October
and cognitive science: 13th Irish conference, AICS 2002 Lim- 10-12, 2007. Proceedings 2. Springer, pp 566–575
erick, Ireland, September 12–13, 2002 Proceedings. Springer, 230. De Gregorio M (2008) An intelligent active video surveillance
pp 45–52 system based on the integration of virtual neural sensors and bdi
213. Neagu C-D, Avouris N, Kalapanidas E, Palade V (2002) Neural agents. IEICE Trans Inf Syst 91(7):1914–1921
and neuro-fuzzy integration in a knowledge-based system for air 231. Qadeer N, Velik R, Zucker G, Boley H (2009) Knowledge
quality prediction. Appl Intell 17(2):141 representation for a neuro-symbolic network in home care risk
214. Corchado Rodrı́guez JM, Aiken J, Rees N et al (2003) Neuro- identification. In: 2009 7th IEEE international conference on
symbolic reasoning system for modeling complex behaviours industrial informatics. IEEE, pp 277–282

123
Neural Computing and Applications (2024) 36:12809–12844 12843

232. Dietrich D, Bruckner D, Zucker G, Muller B, Tmej A (2009) 252. Perrier M, Kalwa J (2005) Intelligent diagnosis for autonomous
Psychoanalytical model for automation and robotics. In: underwater vehicles using a neuro-symbolic system in a dis-
AFRICON 2009. IEEE, pp 1–8 tributed architecture. In: Europe Oceans 2005, vol 1. IEEE,
233. Barbosa R, Cardoso DO, Carvalho D, França FM (2017) A pp 350–355
neuro-symbolic approach to gps trajectory classification. 253. Sánchez VGC, Villegas OOV, Salgado GR, Dominguez H
ESANN (2008) Quality inspection of textile artificial textures using a
234. Barbosa R, Cardoso DO, Carvalho D, Franca FM (2018) neuro-symbolic hybrid system methodology. WSEAS Trans
Weightless neuro-symbolic gps trajectory classification. Neu- Comput 12:1899–1905
rocomputing 298:100–108 254. Velik R, Boley H (2010) Neurosymbolic alerting rules. IEEE
235. Yi K, Wu J, Gan C, Torralba A, Kohli P, Tenenbaum J (2018) Trans Ind Electron 57(11):3661–3668
Neural-symbolic vqa: Disentangling reasoning from vision and 255. Komendantskaya E, Zhang Q (2011) Sherlock-a neural network
language understanding. In: Advances in neural information software for automated problem solving. In: Proceedings of
processing systems, vol 31 seventh international workshop on neural-symbolic learning and
236. Lavrac N, Dzeroski S (1994) Inductive logic programming. In: reasoning
WLP. Springer, pp 146–160 256. Saikia S, Vig L, Srinivasan A, Shroff G, Agarwal P, Rawat R
237. Hatzilygeroudis I, Prentzas J (2000) Neurules: improving the (2016) Neuro-symbolic eda-based optimisation using ilp-en-
performance of symbolic rules. Int J Artif Intell Tools hanced dbns. arXiv preprint arXiv:1612.06528
9(01):113–130 257. Kursuncu U, Gaur M, Sheth A (2019) Knowledge infused
238. Osório F, Amy B, Cechin A (2001) Hybrid machine learning learning (k-il): Towards deep incorporation of knowledge in
tools: Inss-a neuro-symbolic system for constructive machine deep learning. arXiv preprint arXiv:1912.00512
learning. Deep fusion of computational and symbolic process- 258. Khan MJ, Curry E (2020) Neuro-symbolic visual reasoning for
ing, 121–144 multimedia event processing: Overview, prospects and chal-
239. Garcez Ad, Broda K, Gabbay DM (2001) Symbolic knowledge lenges. In: CIKM (Workshops)
extraction from trained neural networks: a sound approach. Artif 259. Kapanipathi P, Abdelaziz I, Ravishankar S, Roukos S, Gray A,
Intell 125(1–2):155–207 Astudillo R, Chang M, Cornelio C, Dana S, Fokoue A, et al
240. Prentzas J, Hatzilygeroudis I, Garofalakis J (2002) A web-based (2020) Question answering over knowledge bases by leveraging
intelligent tutoring system using hybrid rules as its representa- semantic parsing and neuro-symbolic reasoning. arXiv preprint
tional basis. In: Intelligent tutoring systems: 6th international arXiv:2012.01707
conference, ITS 2002 Biarritz, France and San Sebastian, Spain, 260. Yang Z, Ishay A, Lee J (2020) Neurasp: embracing neural
June 2–7, 2002 Proceedings 6. Springer, pp 119–128 networks into answer set programming. In: 29th international
241. Salgado GR, Amy B (2003) Neuro-symbolic hybrid system for joint conference on artificial intelligence (IJCAI 2020)
treatment of gradual rules. Neural Information Processing— 261. Siyaev A, Jo G-S (2021) Neuro-symbolic speech understanding
Letters and Reviews 1(2) in aircraft maintenance metaverse. IEEE Access
242. Prentzas N, Nicolaides A, Kyriacou E, Kakas A, Pattichis C 9:154484–154499
(2019) Integrating machine learning with symbolic reasoning to 262. Stammer W, Schramowski P, Kersting K (2021) Right for the
build an explainable ai model for stroke prediction. In: 2019 right concept: revising neuro-symbolic concepts by interacting
IEEE 19th international conference on bioinformatics and bio- with their explanations. In: Proceedings of the IEEE/CVF con-
engineering (BIBE). IEEE, pp 817–821 ference on computer vision and pattern recognition,
243. Thrun SB, Bala JW, Bloedorn E, Bratko I, Cestnik B, Cheng J, pp 3619–3629
De Jong KA, Dzeroski S, Fisher DH, Fahlman SE, et al (1991) 263. Kimura D, Ono M, Chaudhury S, Kohita R, Wachi A, Agravante
The monk’s problems: A performance comparison of different DJ, Tatsubori M, Munawar A, Gray A (2021) Neuro-symbolic
learning algorithms. Technical report reinforcement learning with first-order logic. arXiv preprint
244. Zhou J, Cui G, Hu S, Zhang Z, Yang C, Liu Z, Wang L, Li C, arXiv:2110.10963
Sun M (2020) Graph neural networks: a review of methods and 264. Evans R, Bošnjak M, Buesing L, Ellis K, Pfau D, Kohli P,
applications. AI Open 1:57–81 Sergot M (2021) Making sense of raw input. Artif Intell
245. Omlin CW, Snyders S (2003) Inductive bias strength in 299:103521
knowledge-based neural networks: application to magnetic res- 265. Mitchener L, Tuckey D, Crosby M, Russo A (2022) Detect,
onance spectroscopy of breast tissues. Artif Intell Med understand, act: a neuro-symbolic hierarchical reinforcement
28(2):121–140 learning framework. Mach Learn 111(4):1523–1549
246. Bologna G (2003) A model for single and multiple knowledge 266. Alshahrani M, Khan MA, Maddouri O, Kinjo AR, Queralt-
based networks. Artif Intell Med 28(2):141–163 Rosinach N, Hoehndorf R (2017) Neuro-symbolic representa-
247. Obot OU, Uzoka F-ME (2009) A framework for application of tion learning on biological knowledge graphs. Bioinformatics
neuro-case-rule base hybridization in medical diagnosis. Appl 33(17):2723–2730
Soft Comput 9(1):245–253 267. Perozzi B, Al-Rfou R, Skiena S (2014) Deepwalk: Online
248. Boulahia J, Smirani L, KSA MA (2015) Experiments of a neuro learning of social representations. In: Proceedings of the 20th
symbolic hybrid learning system with incomplete data ACM SIGKDD international conference on knowledge discov-
249. Ghosh J, Taha I (2018) A neuro-symbolic hybrid intelligent ery and data mining, pp 701–710
architecture with. In: Recent advances in artificial neural net- 268. Agibetov A, Samwald M (2018) Fast and scalable learning of
works, 1 neuro-symbolic representations of biomedical knowledge. arXiv
250. Bhatia S, Kohli P, Singh R (2018) Neuro-symbolic program preprint arXiv:1804.11105
corrector for introductory programming assignments. In: Pro- 269. Wu L, Fisch A, Chopra S, Adams K, Bordes A, Weston J (2018)
ceedings of the 40th international conference on software Starspace: Embed all the things! In: Proceedings of the AAAI
engineering, pp 60–70 conference on artificial intelligence, vol 32
251. Souici-Meslati L, Sellami M (2004) A hybrid approach for 270. Bianchi F, Palmonari M, Hitzler P, Serafini L (2019) Comple-
arabic literal amounts recognition. Arab J Sci Eng 29 menting logical reasoning with sub-symbolic commonsense. In:
Rules and reasoning: third international joint conference,

123
12844 Neural Computing and Applications (2024) 36:12809–12844

RuleML? RR 2019, Bolzano, Italy, September 16–19, 2019, 284. Yin P, Neubig G (2017) A syntactic neural model for general-
Proceedings 3. Springer, pp 161–170 purpose code generation. arXiv preprint arXiv:1704.01696
271. Bianchi F, Palmonari M, Nozza D (2018) Towards encoding 285. Ritchie D, Guerrero P, Jones RK, Mitra NJ, Schulz A, Willis
time in text-based entity embeddings. In: The semantic web– KD, Wu J (2023) Neurosymbolic models for computer graphics.
ISWC 2018: 17th international semantic web conference, In: Computer graphics forum, vol 42. Wiley Online Library,
Monterey, CA, USA, October 8–12, 2018, Proceedings, Part I pp 545–568
17. Springer, pp 56–71 286. Reddy AG, Balasubramanian VN (2022) Estimating treatment
272. Oltramari A, Francis J, Henson C, Ma K, Wickramarachchi R effects using neurosymbolic program synthesis. arXiv preprint
(2020) Neuro-symbolic architectures for context understanding. arXiv:2211.04370
arXiv preprint arXiv:2003.04707 287. Li Z, Huang J, Naik M (2023) Scallop: A language for neu-
273. Singh P, Lin T, Mueller ET, Lim G, Perkins T, Li Zhu W (2002) rosymbolic programming. Proceedings of the ACM on Pro-
Open mind common sense: knowledge acquisition from the gramming Languages 7(PLDI):1463–1487
general public. In: On the move to meaningful internet systems 288. Varela FA (2022) The effects of hybrid neural networks on
2002: CoopIS, DOA, and ODBASE: confederated international meta-learning objectives. PhD thesis
conferences CoopIS, DOA, and ODBASE 2002 Proceedings. 289. Mundhenk TN, Landajuela M, Glatt R, Santiago CP, Faissol
Springer, pp 1223–1237 DM, Petersen BK (2021) Symbolic regression via neural-guided
274. Wang Q, Mao Z, Wang B, Guo L (2017) Knowledge graph genetic programming population seeding. arXiv preprint arXiv:
embedding: a survey of approaches and applications. IEEE 2111.00053
Trans Knowl Data Eng 29(12):2724–2743 290. Chen X, Liang C, Huang D, Real E, Wang K, Pham H, Dong X,
275. Doldy D, Garridoy JS (2021) An energy-based model for neuro- Luong T, Hsieh C-J, Lu Y et al (2024) Symbolic discovery of
symbolic reasoning on knowledge graphs. In: 2021 20th IEEE optimization algorithms. In: Advances in neural information
international conference on machine learning and applications processing systems, vol 36
(ICMLA). IEEE, pp 916–921 291. Mittelstadt BD, Allo P, Taddeo M, Wachter S, Floridi L (2016)
276. Nickel M, Tresp V, Kriegel H-P (2011) A three-way model for The ethics of algorithms: mapping the debate. Big Data Soc
collective learning on multi-relational data. In: Icml, vol 11, 3(2):2053951716679679
pp 3104482–3104584 292. Rudin C (2019) Stop explaining black box machine learning
277. Sun K, Rayudu H, Pujara J (2021) A hybrid probabilistic models for high stakes decisions and use interpretable models
approach for table understanding. In: Proceedings of the AAAI instead. Nature Mach Intell 1(5):206–215
conference on artificial intelligence, vol 35, pp 4366–4374 293. Kazim E, Denny DMT, Koshiyama A (2021) Ai auditing and
278. Kimmig A, Bach S, Broecheler M, Huang B, Getoor L (2012) A impact assessment: according to the UK information commis-
short introduction to probabilistic soft logic. In: Proceedings of sioner’s office. AI Ethics 1:301–310
the NIPS workshop on probabilistic programming: foundations 294. Jobin A, Ienca M, Vayena E (2019) The global landscape of ai
and applications, pp 1–4 ethics guidelines. Nat Mach Intell 1(9):389–399
279. Gol MG, Pujara J, Szekely P (2019) Tabular cell classification 295. Tamang MD, Shukla VK, Anwar S, Punhani R (2021)
using pre-trained cell embeddings. In: 2019 IEEE international Improving business intelligence through machine learning
conference on data mining (ICDM). IEEE, pp 230–239 algorithms. In: 2021 2nd International conference on intelligent
280. Ding M, Chen Z, Du T, Luo P, Tenenbaum J, Gan C (2021) engineering and management (ICIEM). IEEE, pp 63–68
Dynamic visual reasoning by learning differentiable physics
models from video and language. Adv Neural Inf Process Syst Publisher’s Note Springer Nature remains neutral with regard to
34:887–899 jurisdictional claims in published maps and institutional affiliations.
281. Ma K, Francis J, Lu Q, Nyberg E, Oltramari A (2019) Towards
generalizable neuro-symbolic systems for commonsense ques-
Springer Nature or its licensor (e.g. a society or other partner) holds
tion answering. arXiv preprint arXiv:1910.14087
exclusive rights to this article under a publishing agreement with the
282. Sundar LKS, Muzik O, Buvat I, Bidaut L, Beyer T (2021)
author(s) or other rightsholder(s); author self-archiving of the
Potentials and caveats of ai in hybrid imaging. Methods
accepted manuscript version of this article is solely governed by the
188:4–19
terms of such publishing agreement and applicable law.
283. Kang T, Turfah A, Kim J, Perotte A, Weng C (2021) A neuro-
symbolic method for understanding free-text medical evidence.
J Am Med Inform Assoc 28(8):1703–1711

123

You might also like

pFad - Phonifier reborn

Pfad - The Proxy pFad of © 2024 Garber Painting. All rights reserved.

Note: This service is not intended for secure transactions such as banking, social media, email, or purchasing. Use at your own risk. We assume no liability whatsoever for broken pages.


Alternative Proxies:

Alternative Proxy

pFad Proxy

pFad v3 Proxy

pFad v4 Proxy