
Deep Algorithmic Question Answering: Towards a Compositionally Hybrid AI for Algorithmic Reasoning

09/16/2021
by Kwabena Nuamah, et al.

An important aspect of artificial intelligence (AI) is the ability to reason in a step-by-step "algorithmic" manner that can be inspected and verified for its correctness. This is especially important in the domain of question answering (QA). We argue that the challenge of algorithmic reasoning in QA can be effectively tackled with a "systems" approach to AI which features a hybrid use of symbolic and sub-symbolic methods, including deep neural networks. Additionally, we argue that while neural network models with end-to-end training pipelines perform well in narrow applications such as image classification and language modelling, they cannot, on their own, successfully perform algorithmic reasoning, especially if the task spans multiple domains. We discuss a few notable exceptions and point out how they are still limited when the QA problem is widened to include other intelligence-requiring tasks. However, deep learning, and machine learning in general, do play important roles as components in the reasoning process. We propose an approach to algorithmic reasoning for QA, Deep Algorithmic Question Answering (DAQA), based on three desirable properties which such an AI system should possess: interpretability, generalizability and robustness. We conclude that these are best achieved with a combination of hybrid and compositional AI.



1 Introduction

Algorithms form the basis of problem solving and are, therefore, critical to any attempt to emulate human-like reasoning in AI. Algorithmic reasoning, as defined in [33], allows us to automate and engineer systems that reason. An interesting domain in which to apply and evaluate such AI capabilities is that of question answering, and in particular, open-domain QA. Some of the early techniques in QA, e.g. [15], focused on reasoning about problems in a purely logical manner. However, recent techniques have been aimed more at the challenges of constructing the right queries to retrieve answers from knowledge bases (KBs) ([32], [10]), as well as at the construction of very large language models over large numbers of documents from the web [9].

Yet, many other tasks, such as the automatic selection of KBs and relevant knowledge, the choice of inference algorithms, and how to combine them, are all important to fully automating the QA process. Several of these tasks are scoped out as engineering tasks which experts perform when deploying these AI systems (see figure 1). We argue that these scoped-out tasks should be part of the AI models built for QA, as they are key ingredients in the full automation of the QA process. Deep Algorithmic Question Answering focuses on these tasks as well as on the traditional QA problem, with the added challenge of tackling questions that require multiple steps of reasoning to solve.

We conclude that it is important to refine the scope of problems which AI for QA should solve by incorporating those tasks which, in real-world applications, look messy and are often tackled by human experts or data annotators. Further, tackling these problems highlights the need for AI approaches that can appropriately leverage both symbolic and sub-symbolic AI methods, and brings to the fore the need for AI systems that are compositional in order to adapt seamlessly to different problem types.

In the sections that follow, we give some background to algorithmic reasoning, hybrid AI and compositionality, and then describe our proposed DAQA system.

2 Background

2.1 Algorithmic Reasoning

The task of algorithmic reasoning places emphasis on automating systems to reason about problems and programs following similar mechanisms to those which humans use when solving problems [20]. However, our interpretation of this task goes beyond the classic logical reasoning context to one where learning and reasoning are combined to tackle more complex and diverse problems. This includes, for instance, choosing which algorithms to use, when and how to combine them [33]. Its application to question answering means having an automated system which is deliberate in the selection of inference steps needed to answer a question such that the inference process forms a computational graph which represents an algorithm for solving the problem.
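
To make this concrete, the sketch below (ours, in Python; all names and values are hypothetical) records an inference process as a small computational graph whose execution yields both an answer and a step-by-step trace that can be inspected and verified:

```python
# A minimal sketch (not the paper's implementation) of QA inference recorded
# as an inspectable computational graph: each node is one reasoning step, and
# executing the graph yields an answer plus a verifiable trace of every step.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Step:
    name: str
    op: Callable                                  # the inference operation at this node
    inputs: list = field(default_factory=list)    # names of parent steps

def run(graph, target, trace):
    """Evaluate `target` recursively, recording every intermediate result."""
    if target not in trace:
        step = graph[target]
        args = [run(graph, parent, trace) for parent in step.inputs]
        trace[target] = step.op(*args)
    return trace[target]

# Toy algorithm: average two retrieved figures (values invented for illustration).
graph = {
    "retrieve_a": Step("retrieve_a", lambda: 3.2),
    "retrieve_b": Step("retrieve_b", lambda: 2.8),
    "aggregate":  Step("aggregate", lambda a, b: (a + b) / 2,
                       inputs=["retrieve_a", "retrieve_b"]),
}
trace = {}
print(run(graph, "aggregate", trace))  # 3.0
print(trace)                           # every intermediate step is inspectable
```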

We claim that achieving this with a purely symbolic or DL approach is not practical, given the known limitations of symbolic and sub-symbolic methods [25]. There is a lot of ongoing work to reconcile these techniques (see section 2.2). However, much of it is either theory-focused or at levels of abstraction that still make it hard to tackle algorithmic reasoning in a practical problem domain such as QA.

This work is motivated in part by models proposed in [33]. However, we note that the authors make assumptions about the contexts in which algorithms are used, and limit the concept of algorithms to a narrow application of a single algorithm that is trained end-to-end. This paper extends that notion to include the automatic, appropriate and effective composition of algorithms to solve different kinds of problems in QA. Other related work, such as [11], [23], [26], and [6], focuses on the application of rules for decomposing problems in order to find answers, leading to inference processes or plans which are constructed dynamically. We extend some of these ideas in this work.

Implicitly, the expectation of algorithmic reasoning is that the process and the inferred answer can be inspected to verify the steps involved in answering a question. This is very different from the expectations one has of deep neural network models trained end-to-end, where the process of completing the task is not interpretable.

In this paper we focus on the QA problem, especially since QA is one of the longest-standing applications of AI and since many other AI problems can be framed as QA problems. Why is it important to take such a high-level perspective on QA and algorithmic reasoning, instead of one at the deeper level of knowledge/vector representation and semantics?

  • It puts into sharper focus how narrow many popular AI techniques are: e.g., language modelling, image classification, etc. ‘Narrow’ in the sense that the models excel at perception tasks for which lots of training data is available, and in the sense that they are restricted to those specific tasks and cannot be applied to other tasks ‘as-is’ without major changes to the models or to how they are used.

  • It highlights how unreasonable many of the assumptions in AI models are when applied to real-world problems: for instance, (1) the assumption that the answer to a question will be from the same distribution of data which was used to train the model; (2) the assumption that the model always has access to all the data that it needs to answer a question, such that choosing which KBs to use to answer a question is never a problem. In many real-world applications, the data sources are diverse and heterogeneous, noisy and incomplete.

  • It shows how many AI techniques fail to address some of the challenging problems that have to be tackled, for example, dealing with uncertainty and with noisy and incomplete information from KBs, especially in the context of QA.

  • It shows how huge aspects of what we currently claim to be AI are heavily dependent on designs and inputs from humans, and how much work needs to be done to solve simple tasks without human intervention; for instance, pre-defining which DL models are used to tackle a classification or prediction task. Although tasks such as feature engineering, which were predominantly an expert’s job, have been replaced by better DL models, human expertise has only shifted to tasks related to the choice of neural network architecture, dataset selection and pre-processing for training, and the general engineering required to solve the specific task at hand.

  • It shows why a compositional and hybrid approach is needed, given that many of these tasks cannot simply be handled with end-to-end training of deep neural network models. We believe that a systems approach to AI is needed to tackle algorithmic reasoning in QA, and we agree with the claim that there is a need to find new ways to synthesize AI from a hybrid of symbolic methods and deep neural networks [25].

Figure 1: Diverse tasks that are part of the open-domain question answering process. However, most of the attention in work related to QA focuses on the core AI tasks related to information retrieval, inference or prediction.

2.2 Hybrid AI

Hybrid AI is concerned with the integration of symbolic (logical) and sub-symbolic (DL-based) AI methodologies into neuro-symbolic architectures. This is a rapidly growing field with diverse approaches being explored; we refer the reader to surveys of these works, including [4] and [2]. Additionally, Henry Kautz’s classification of the different types of neural-symbolic system integration is outlined in [21]. Our notion of hybrid AI is primarily inspired by DARPA’s ‘Third Wave of AI’ research focus [8], “where systems are capable of acquiring new knowledge through generative contextual and explanatory models”.

The strengths and shortcomings of both the DL and symbolic AI paradigms are well documented. More recently, in [3], some of the pioneers and advocates of DL for AI highlighted the need to address the limitations of DL in order to achieve human-like reasoning capabilities. In particular, they mention DL’s current inability to perform the deliberate, systematic reasoning and planning described by Kahneman’s ‘System 2’ reasoning [18].

Reconciling methodologies in distinct areas of learning and reasoning (e.g. statistics and logic) means combining the respective advantages while circumventing the shortcomings and limitations [4]. Approaches taken to reconcile symbolic and sub-symbolic reasoning include (not in any way exhaustively): creating a one-to-one correspondence between artificial neurons and elements of logical formulae [29]; using reinforcement learning with Monte-Carlo tree search to play Go in AlphaGo [30]; combining deductive and inductive reasoning methods for question answering [28], [6], [27]; extending neural networks with external memory in the Neural Turing Machine (NTM) [14] and its reinforcement learning variants [37] to make them more expressive; memory networks [5], [34]; and probabilistic reasoning with program induction [24]. There is also a lot of interest in enhancing machine learning with knowledge representation and reasoning [7].

A common theme in most of the work exploring hybrid AI is the need for symbol manipulation on models of the world, while being able to leverage other sub-symbolic machinery to learn these models from examples or to predict actions based on the models. These capabilities are also essential for performing algorithmic reasoning.

2.3 Compositionality

The space of algorithms and algorithmic reasoning is far too large and varied for a single neural network model to solve it in a practical way. It is not always possible to program or train one AI system to solve diverse kinds of problems. In many cases, even the ability of an expert to engineer a system to solve a range of problems, such as that of open-domain QA, is limited by the fact that one cannot anticipate all the possible kinds of questions to answer and how to combine existing AI modules to answer them. Compositionality provides a mechanism to compose solutions to problems by automating the combination of existing AI modules to solve new and varied problems. In this work, we use “compositionality” in a loose sense to include the entire spectrum from the high-level integration of distinct AI components and systems, through automatic program composition, all the way to the deeper-level integration of knowledge representation, semantics and neural embedding.

Different approaches can be used to build such compositional AI systems. We highlight a few below, though this is by no means an exhaustive list. [13] created an end-to-end trainable system, NEURAL TERPRET, that learns to write interpretable algorithms with perceptual components, while the Neural Turing Machine [14] extends neural networks with an external memory such that the network can infer simple programs such as copying and sorting. Some neural-symbolic methods provide compositionality by treating the symbolic and neural network modules both as black boxes and integrating them by exposing appropriate functions [31]. In the majority of cases, compositionality is achieved by mapping neural network modules onto the semantic parse tree of a natural language question, or by generating sequences of functions from the question text using a trained network [1], [36], [22], [19], [16], [17]. The generated program is then executed to answer the question.

However, generating a neural network architecture from a semantic parse tree of natural language text is not enough to achieve algorithmic reasoning. This is because intermediate reasoning steps, such as handling failure due to a lack of relevant data, cannot be recovered from within a shallow parse tree without further reasoning or inference steps. Sometimes, the data retrieved at one step during inference determines how the rest of the algorithm is developed. For instance, for a question such as “Which country in Europe will have the highest GDP growth rate by 2032?”, the kind of data retrieved (or the lack thereof) will determine whether retrieval is sufficient or a more involved regression over past data for prediction will be needed. Hence, the automatic formulation of new algorithms using existing components requires one to look beyond the initial parse tree of the question and to work within the constraints of pre- and post-conditions of the underlying symbolic and sub-symbolic modules in order to combine them appropriately.
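
As an illustration of this data-dependent planning, consider the minimal sketch below (our own, with a hypothetical `lookup` helper and invented figures): the outcome of the retrieval step determines whether the inference plan terminates there or is extended with a regression step over past data.

```python
# A hedged sketch of data-dependent algorithm construction: retrieval is tried
# first, and only if the fact is absent is the plan extended with regression.
# The KB, its contents and the helper names are all hypothetical.

def lookup(country, year, kb):
    """Stand-in for a KB query; returns None when the fact is absent."""
    return kb.get((country, year))

def linear_trend(points):
    """Ordinary least-squares line through (year, value) pairs."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    denom = sum((x - mx) ** 2 for x, _ in points)
    slope = sum((x - mx) * (y - my) for x, y in points) / denom
    return slope, my - slope * mx

def answer_growth_rate(country, year, kb):
    value = lookup(country, year, kb)
    if value is not None:
        return value                     # retrieval alone suffices
    # Otherwise extend the inference plan with a regression step over past data.
    past = sorted((y, v) for (c, y), v in kb.items() if c == country)
    slope, intercept = linear_trend(past)
    return slope * year + intercept

kb = {("France", 2020): 1.5, ("France", 2021): 1.8, ("France", 2022): 2.1}
print(answer_growth_rate("France", 2032, kb))  # no stored fact, so regression: 5.1
```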

Figure 2: Going beyond the semantic parse tree of the question by applying additional decompositions based on rules or pre-trained models for predicting continuations of the inference plan.

3 Desiderata

Achieving the task of algorithmic reasoning in the domain of question answering requires us to have some expectation of what such a system should look like and how it should behave. We list three of these below, all of which introduce new challenges that, if solved, will advance the development of AI architectures for QA.

  • Interpretability: One of the basic requirements of a QA system with algorithmic reasoning capabilities is that its inner workings are interpretable and inspectable by a human user. Additionally, interpretability allows a user to check whether the pre- and post-conditions of the algorithms are satisfied. For instance, if heterogeneous modules are automatically composed to form novel algorithms which answer a question, one should be able to verify that the conditions associated with the appropriate use of the modules have been met. A key requirement for interpretability is a representation of the inference mechanism which supports both symbolic and sub-symbolic inference: for example, a dual (or hybrid) representation which supports both deductive inference through symbol manipulation and inductive inference over data observations using statistical methods. Better still, a representation which allows for fluid translation between these two representations will be useful.

  • Generalizability: It is also important to think of QA problems at a much broader level, beyond narrow vertical perspectives such as image recognition, prediction or classification tasks, in order to build the capabilities of AI systems for algorithmic reasoning. One of the criticisms of narrow AI is that such systems solve very specific problems well but rarely capture most of the complexities which need to be dealt with in real-world applications. Most of these complexities are often handled by an expert. A desirable feature of AI systems in QA which perform algorithmic reasoning is that they are not restricted to the neatly defined problems in benchmark datasets, which sometimes lead to over-engineered AI architectures built to exploit biases observable in the dataset. Additionally, it is desirable for QA systems to be general in how they compose algorithms, both in the aspects of the QA process that they use and in the kinds of problems that they can solve.

  • Robustness: Finally, there is a need to build AI systems that are robust in the presence of noise, incomplete data and uncertainty. Robustness is also needed as knowledge changes or new knowledge is acquired. These are obvious problems faced when using AI in the real world, and working only on problems or datasets that exclude these challenges results in AI systems which are brittle. In algorithmic reasoning in particular, it is necessary to build AI systems which are able to identify these uncertainties and incorporate them in the inference process and in the automatic generation of programs to solve problems. For instance, failure to access data, or inconsistencies in data retrieved from KBs, should not stop the QA system from finding answers if an alternative strategy can be found using a different algorithm. However, how the AI system deals with such issues should be transparent to users.

In summary, many of the debates about symbolic versus sub-symbolic AI cease to exist when the scope of the problem being solved is viewed in its entirety; i.e. to include not only the specific task of prediction or classification, but other intermediate reasoning and decision steps (see figure 1) which are often performed by the creators of the AI system and left out of the scope of what the system does.

4 Deep Algorithmic QA: Hybrid + Compositionality

Our proposed approach to algorithmic reasoning for question answering, DAQA, leverages both hybrid AI and compositionality. Specifically, we are interested not only in a narrow aspect of the question answering task, but also in the often-ignored aspects usually hidden in the list of things which an engineer or expert user does. DAQA is deep in two senses: (1) the inference graphs constructed are deeper than the initial semantic parse trees of the question; (2) it uses deep neural networks as part of the inference framework.

We use the following question example to shed light on the different aspects of our proposal: “What will be the population of the country in Europe which is predicted to have the highest GDP in 2032?”.

4.1 Motivation

First, we make no assumptions about the presence of data needed to answer the question. We only assume that the AI system has a list of different KBs that it can access. These could be web document sources that it has crawled, or publicly available knowledge graphs with interfaces for querying data (e.g. SPARQL [35] or a web-based application programming interface (API)). This means that the choice of KBs to query and the integration of data from diverse sources is not trivial. Different modalities (text, images, videos) and formalisms (unstructured text, RDF, graph, probabilistic, etc.) make the task all the more difficult.

Second, we do not assume that the answer is pre-stored in any KB. For the question above, the chance of an exact answer being stored in some KB is very low to non-existent. As such, the only way to solve this question is to reason about it and dynamically construct an algorithm that can solve it.

Third, we claim that creating a deep neural network model which is trained in an end-to-end way to tackle open-domain QA including questions of the kind that we have above is not practical with the present state of the technology. However, simpler neural network models are available for solving aspects of the problem, such as the semantic parsing task and the prediction task. This brings to the fore a need for a compositional approach. That is, general purpose neural network models, statistical and arithmetic inference operations can be composed in a dynamic way to construct an appropriate algorithm that solves the question. Constraints on the individual inference modules such as pre-conditions and post-conditions ensure that they are composed in a computationally valid way.
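
The following sketch illustrates one way such pre- and post-condition constraints could gate composition; the `Module` abstraction, the contracts and the example pipeline are ours, not part of any existing system.

```python
# A minimal sketch of contract-checked composition: every module application
# either satisfies its declared pre- and post-conditions or raises an
# inspectable error, so composed algorithms stay computationally valid.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Module:
    name: str
    fn: Callable
    pre: Callable    # predicate over the module's input
    post: Callable   # predicate over the module's output

    def apply(self, x):
        if not self.pre(x):
            raise ValueError(f"{self.name}: precondition failed on {x!r}")
        y = self.fn(x)
        if not self.post(y):
            raise ValueError(f"{self.name}: postcondition failed on {y!r}")
        return y

def compose(modules, x):
    """Apply modules in sequence, checking each contract along the way."""
    for m in modules:
        x = m.apply(x)
    return x

# Toy pipeline: parse a numeric string, then take a square root.
to_float = Module("to_float", float,
                  pre=lambda s: isinstance(s, str),
                  post=lambda v: isinstance(v, float))
sqrt = Module("sqrt", lambda v: v ** 0.5,
              pre=lambda v: v >= 0,
              post=lambda v: v >= 0)
print(compose([to_float, sqrt], "2.25"))  # 1.5; each step is verifiable
```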

Fourth, we do not assume that a correct semantic parse of the question is enough to compose a program which answers the question. In addition to semantic parsing, it is necessary to reason about the question to explore possible algorithms which could solve it (see figure 2). In the above question, for example, there are tasks such as prediction that will not be explicit in the parse tree. As such, it is important to consider deductive methods to decompose the problem. Additionally, such decomposition needs to be recursive and robust in the event of a failure to infer an answer, exploring different possible deductions simultaneously. Hybrid AI plays a significant role here, as it provides a substrate on which to perform reasoning in the inference process while offering more rigorous inductive mechanisms for drawing inferences from data.
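
A toy illustration of this robustness to failure is sketched below; for simplicity the alternative deductions are tried in sequence rather than simultaneously, and the strategy functions are hypothetical stand-ins for real decomposition branches.

```python
# A toy sketch (ours) of failure-tolerant exploration: the failure of one
# deduction (e.g. no stored answer) does not stop the system, and the failures
# themselves remain recorded and transparent to the user.

def by_retrieval(question):
    raise LookupError("no stored answer for " + question)   # simulated failure

def by_prediction(question):
    return "predicted answer to: " + question               # fallback branch

def answer(question, strategies):
    errors = []
    for strategy in strategies:
        try:
            return strategy(question)    # first successful deduction wins
        except Exception as e:
            errors.append(e)             # keep failures for inspection
    raise RuntimeError(f"all strategies failed: {errors}")

print(answer("highest GDP growth in Europe by 2032?",
             [by_retrieval, by_prediction]))
```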

Figure 3: (a) The base inference graph, with a question node and an answer node that is to be inferred. They are linked by an edge that can be split by applying decomposition operations to the question node. (b) An inference graph made up of functional nodes and edges labelled by operations for predicting decomposition and aggregation functions. The decomposition sub-graph (in red) is guided by a function that decomposes a functional node to create new continuations of the inference graph, and the aggregation sub-graph (in green) uses a model to select appropriate functions to combine nodes. Functional nodes provide both a symbolic and a vector representation of the node’s attribute-value internal representation, as well as functions for converting between the two representations.

4.2 Proposed Model

A fundamental part of the above motivation is that of knowledge representation which supports both hybrid AI and compositionality. Although we leverage symbolic AI methods, we do not propose a classic expert system-styled mechanism. Instead, we propose the idea of hybrid inference graphs with functional nodes and illustrate these in figure 3. An inference graph is constructed and expanded dynamically through the decompositions of its functional nodes. Functional nodes represent three things:

  1. data: this includes parsed information from the question, data to be inferred (represented by variables), and data retrieved from KBs or inferred and propagated from other functional nodes.

  2. the functional operations to be applied, e.g. regression. These operations could themselves be neural networks for prediction, classification, etc.

  3. a model to convert between the symbolic and vectorized representation of the functional node, possibly obtained through an aggregation of the embeddings of its elements.

Functional nodes, therefore, provide support for both the symbolic manipulation of objects and the vector representation which can leverage the capabilities of DL. The edges linking functional nodes in the graph represent rules or transition functions from the state of one functional node to the next. This provides a mechanism for decomposing functional nodes, thereby expanding the frontier of the inference graph. The rules can be provided or learned from data. Techniques developed in reinforcement learning can be used to learn these transition functions in order to predict subsequent decompositions of nodes on the inference graph from a handful of rules and the pre- and post-conditions of the various inference operations.
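
A minimal sketch of these ideas follows. The `FunctionalNode` and `decompose` names are ours, the symbolic-to-vector conversion is reduced to a toy hashing scheme, and the single decomposition rule stands in for the provided or learned transition functions described above.

```python
# A hedged sketch of functional nodes and decomposition edges: each node holds
# attribute-value data, a functional operation, and a (toy) conversion to a
# vector representation; rules expand the frontier of the inference graph.

from dataclasses import dataclass, field

@dataclass
class FunctionalNode:
    data: dict                    # attribute-value pairs, e.g. parsed question slots
    op: str = "value"             # functional operation to apply (e.g. "regress", "max")
    children: list = field(default_factory=list)

    def to_vector(self):
        """Toy symbolic-to-vector conversion: hash attributes into a fixed vector."""
        vec = [0.0] * 8
        for k, v in self.data.items():
            vec[hash((k, str(v))) % 8] += 1.0
        return vec

def decompose(node, rules):
    """Expand the inference-graph frontier: apply the first applicable rule."""
    for applicable, expand in rules:
        if applicable(node):
            node.children = expand(node)
            break
    return node.children

# Rule: a question over a set of countries decomposes into one child per country.
rules = [(
    lambda n: "countries" in n.data,
    lambda n: [FunctionalNode({"country": c, "property": n.data["property"]})
               for c in n.data["countries"]],
)]
root = FunctionalNode({"countries": ["France", "Germany"], "property": "GDP"}, op="max")
print(decompose(root, rules))   # two child nodes, one per country
```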

Training this system as a whole to answer questions can be achieved in two ways. First, one may use some form of weak (or distant) supervision signal, such as the question and the expected answer. However, constructing such a large dataset is an expensive and prohibitive process. An alternative is to leverage existing datasets to train the individual modules and learn a model that complements the deduction process by predicting candidate decompositions to be applied and choosing appropriate operations for aggregating functional nodes.

As new knowledge becomes available, the different sub-models needed to construct the inference graph, e.g. the decomposition, aggregation and representation-conversion functions, can be updated without having to re-train the entire system. Also, as prior knowledge changes, the representation in the functional nodes can be updated. Similar to the method used in [24], uncertainty values can be inferred and stored in one of the attribute-value pairs. This can be the basis of Bayesian updates as prior knowledge from KBs changes.
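
Under the simplifying assumption (ours, not the paper's) that a stored value and its uncertainty are the mean and variance of a Gaussian, the Bayesian update mentioned above has the familiar closed form sketched below.

```python
# A minimal sketch of a Bayesian update of a node's stored estimate: new
# evidence from a KB shifts the (mean, variance) attribute-value pair in
# closed form via the conjugate Gaussian update. All figures are invented.

def bayes_update(prior_mean, prior_var, obs_mean, obs_var):
    """Combine a stored Gaussian estimate with a new (noisy) observation."""
    precision = 1.0 / prior_var + 1.0 / obs_var
    post_var = 1.0 / precision
    post_mean = post_var * (prior_mean / prior_var + obs_mean / obs_var)
    return post_mean, post_var

# E.g. a node's {"population": (mean, variance)} pair revised by fresh KB data.
print(bayes_update(67.0e6, 4.0e12, 68.0e6, 1.0e12))  # posterior mean ~67.8e6
```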

5 Discussion

Our proposed approach brings on board novel perspectives on AI for question answering. However, it also builds on some other related ideas and methodologies.

While there have been QA techniques that perform deductive reasoning during inference (e.g. [11]) using operations such as query decomposition and rewriting, they lack the machinery to perform inductive reasoning using more detailed arithmetic and statistical operations. Recent methods in the FRANK QA system ([26], [6]) adopt a hybrid inference architecture which allows for recursive deductive reasoning using rules and the aggregation of data for prediction using a variety of inference operations, including pre-trained neural network models. However, this approach lacks (1) a neural representation of inference nodes and (2) the ability to intelligently search through the space of inference operations for the appropriate ones to use in order to make inference more efficient. Recent work attempts to improve the automatic selection of kernels for Gaussian Process regression [12]. That said, the recursive approach used allows for the dynamic composition of modular inference operations beyond the one constructed from the syntactic or semantic parse of the question.

Many of the QA methods discussed in §2.2 and §2.3 generate programs based on the parse trees of the natural language question and do not perform any further deductive reasoning or decompositions. As discussed in the respective sections, they are focused on other neuro-symbolic tasks, such as integrating knowledge into neural networks, and do not tackle many of the tasks discussed in §4.

Although [33] proposes ideas for achieving neural algorithmic reasoning, it differs from our proposal in two main ways. First, the notion of algorithms is at a different level of granularity: the focus in that paper is on ‘lower-level’ algorithms such as sorting, and more complex algorithms involving higher-level operations, such as regression for prediction and many other arithmetic and statistical operations, are not explored. Second, estimating the outputs of the algorithm using a purely neural network approach still suffers from a lack of interpretability, given that it is still a black box from the perspective of a user. This makes it very hard to verify that the neural network is executing the algorithms correctly.

Nevertheless, our proposed model also has some difficulties that need to be overcome. First, constructing a model which allows for the seamless conversion between symbolic and vector representations of the functional nodes across multiple domains is a hard problem and is still an active research area in neuro-symbolic AI. The space of decomposition and aggregation operations is also very large, so appropriate search optimisations and heuristics will have to be developed to make it tractable. Finally, training the model as a whole will be very hard, but reusing and fine-tuning pre-trained models in a plug-and-play manner within the inference architecture may be a possible solution.

6 Conclusion

The problem of algorithmic reasoning is one that fits well with the domain of QA since it helps to automate several aspects of the QA pipeline and leads to interpretable models for answering questions. We claim that a hybrid approach to AI with a strong element of compositionality is needed to tackle such a perspective on QA and other AI problems. We have proposed a systems approach to AI which leverages both symbolic and sub-symbolic methods in a framework that leads to solutions which are not possible by either one of these paradigms alone.

Acknowledgment

The author would like to thank Vaishak Belle, Alan Bundy and Thomas Fletcher for feedback on an earlier draft and Huawei for supporting the research on which this paper was based under grant HO2017050001B8s.

References

  • [1] J. Andreas, M. Rohrbach, T. Darrell, and D. Klein (2016-06) Learning to compose neural networks for question answering. arXiv:1601.01705 [cs]. External Links: 1601.01705 Cited by: §2.3.
  • [2] V. Belle (2020-06) Symbolic Logic meets Machine Learning: A Brief Survey in Infinite Domains. arXiv:2006.08480 [cs]. External Links: 2006.08480 Cited by: §2.2.
  • [3] Y. Bengio, Y. LeCun, and G. Hinton (2021-07) Deep learning for AI. Communications of the ACM 64 (7), pp. 58–65. External Links: ISSN 0001-0782, 1557-7317, Document Cited by: §2.2.
  • [4] T. R. Besold, A. d’Avila Garcez, S. Bader, H. Bowman, P. Domingos, P. Hitzler, K. Kuehnberger, L. C. Lamb, D. Lowd, P. M. V. Lima, L. de Penning, G. Pinkas, H. Poon, and G. Zaverucha (2017-11) Neural-symbolic learning and reasoning: A survey and interpretation. arXiv:1711.03902 [cs]. External Links: 1711.03902 Cited by: §2.2, §2.2.
  • [5] A. Bordes, N. Usunier, S. Chopra, and J. Weston (2015-06) Large-scale Simple Question Answering with Memory Networks. arXiv:1506.02075 [cs]. External Links: 1506.02075 Cited by: §2.2.
  • [6] A. Bundy, K. Nuamah, and C. Lucas (2018) Automated reasoning in the age of the internet. In International Conference on Artificial Intelligence and Symbolic Computation, pp. 3–18. Cited by: §2.1, §2.2, §5.
  • [7] F. G. Cozman and H. N. Munhoz (2021-09) Some thoughts on knowledge-enhanced machine learning. International Journal of Approximate Reasoning 136, pp. 308–324. External Links: ISSN 0888613X, Document Cited by: §2.2.
  • [8] DARPA (2018) AI Next Campaign. Note: https://www.darpa.mil/work-with-us/ai-next-campaign Cited by: §2.2.
  • [9] J. Devlin, M. Chang, K. Lee, and K. Toutanova (2019-05) BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv:1810.04805 [cs]. External Links: 1810.04805 Cited by: §1.
  • [10] M. Dubey, D. Banerjee, A. Abdelkawi, and J. Lehmann (2019) LC-QuAD 2.0: A Large Dataset for Complex Question Answering over Wikidata and DBpedia. In The Semantic Web – ISWC 2019, C. Ghidini, O. Hartig, M. Maleshkova, V. Svátek, I. Cruz, A. Hogan, J. Song, M. Lefrançois, and F. Gandon (Eds.), Vol. 11779, pp. 69–78. External Links: Document, ISBN 978-3-030-30795-0 978-3-030-30796-7 Cited by: §1.
  • [11] A. Fader, L. Zettlemoyer, and O. Etzioni (2014-08) Open question answering over curated and extracted knowledge bases. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1156–1165. External Links: Document, ISBN 978-1-4503-2956-9 Cited by: §2.1, §5.
  • [12] T. Fletcher, A. Bundy, and K. Nuamah (2021) GPy-ABCD: A Configurable Automatic Bayesian Covariance Discovery Implementation. Cited by: §5.
  • [13] A. L. Gaunt, M. Brockschmidt, N. Kushman, and D. Tarlow (2017) Differentiable programs with neural libraries. International Conference on Machine Learning. PMLR, pp. 10. Cited by: §2.3.
  • [14] A. Graves, G. Wayne, and I. Danihelka (2014-12) Neural turing machines. arXiv:1410.5401 [cs]. External Links: 1410.5401 Cited by: §2.2, §2.3.
  • [15] B. F. Green, A. K. Wolf, C. Chomsky, and K. Laughery (1961) Baseball: an automatic question-answerer. In Papers Presented at the May 9-11, 1961, Western Joint IRE-AIEE-ACM Computer Conference on - IRE-AIEE-ACM ’61 (Western), pp. 219. External Links: Document Cited by: §1.
  • [16] J. Johnson, B. Hariharan, L. van der Maaten, L. Fei-Fei, C. L. Zitnick, and R. Girshick (2017-07) CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1988–1997. External Links: Document, ISBN 978-1-5386-0457-1 Cited by: §2.3.
  • [17] J. Johnson, B. Hariharan, L. Van Der Maaten, J. Hoffman, L. Fei-Fei, C. L. Zitnick, and R. Girshick (2017-10) Inferring and Executing Programs for Visual Reasoning. In 2017 IEEE International Conference on Computer Vision (ICCV), pp. 3008–3017. External Links: Document, ISBN 978-1-5386-1032-9 Cited by: §2.3.
  • [18] D. Kahneman (2012) Thinking, fast and slow. Penguin Psychology, Penguin Books. External Links: ISBN 978-0-14-103357-0 Cited by: §2.2.
  • [19] P. Kapanipathi, I. Abdelaziz, S. Ravishankar, S. Roukos, A. Gray, R. Astudillo, M. Chang, C. Cornelio, S. Dana, A. Fokoue, D. Garg, A. Gliozzo, S. Gurajada, H. Karanam, N. Khan, D. Khandelwal, Y. Lee, Y. Li, F. Luus, N. Makondo, N. Mihindukulasooriya, T. Naseem, S. Neelam, L. Popa, R. Reddy, R. Riegel, G. Rossiello, U. Sharma, G. P. S. Bhargav, and M. Yu (2020-12) Question answering over knowledge bases by leveraging semantic parsing and neuro-symbolic reasoning. arXiv:2012.01707 [cs]. External Links: 2012.01707 Cited by: §2.3.
  • [20] F. Kröger (1977) LAR: a logic of algorithmic reasoning. Acta Informatica 8 (3), pp. 243–266. Cited by: §2.1.
  • [21] L. Lamb, A. Garcez, M. Gori, M. Prates, P. Avelar, and M. Vardi (2020) Graph neural networks meet neural-symbolic computing: A survey and perspective. In IJCAI-PRICAI 2020-29th International Joint Conference on Artificial Intelligence-Pacific Rim International Conference on Artificial Intelligence, Cited by: §2.2.
  • [22] C. Liang, J. Berant, Q. Le, K. D. Forbus, and N. Lao (2017-04) Neural symbolic machines: learning semantic parsers on freebase with weak supervision. arXiv:1611.00020 [cs]. External Links: 1611.00020 Cited by: §2.3.
  • [23] P. Liang, M. I. Jordan, and D. Klein (2013-06) Learning Dependency-Based Compositional Semantics. Computational Linguistics 39 (2), pp. 389–446. External Links: ISSN 0891-2017, 1530-9312, Document Cited by: §2.1.
  • [24] R. Manhaeve, S. Dumancic, A. Kimmig, T. Demeester, and L. De Raedt (2018) DeepProbLog: Neural probabilistic logic programming. Advances in Neural Information Processing Systems 31, pp. 3749–3759. Cited by: §2.2, §4.2.
  • [25] G. Marcus and E. Davis (2019) Rebooting AI: Building artificial intelligence we can trust. Cited by: 5th item, §2.1.
  • [26] K. Nuamah, A. Bundy, and C. Lucas (2016) Functional inferences over heterogeneous data. In International Conference on Web Reasoning and Rule Systems, pp. 159–166. Cited by: §2.1, §5.
  • [27] K. Nuamah and A. Bundy (2020) Explainable inference in the frank query answering system. In ECAI 2020, pp. 2441–2448. Cited by: §2.2.
  • [28] K. Nuamah (2018) Functional inferences over heterogeneous data. Ph.D. Thesis, School of Informatics, University of Edinburgh. Cited by: §2.2.
  • [29] R. Riegel, A. Gray, F. Luus, N. Khan, N. Makondo, I. Y. Akhalwaya, H. Qian, R. Fagin, F. Barahona, U. Sharma, S. Ikbal, H. Karanam, S. Neelam, A. Likhyani, and S. Srivastava (2020-06) Logical neural networks. arXiv:2006.13155 [cs]. External Links: 2006.13155 Cited by: §2.2.
  • [30] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis (2016-01) Mastering the game of Go with deep neural networks and tree search. Nature 529 (7587), pp. 484–489. External Links: ISSN 0028-0836, 1476-4687, Document Cited by: §2.2.
  • [31] E. Tsamoura, T. Hospedales, and M. Loizos (2021) Neural-symbolic integration: a compositional perspective. Proceedings of the AAAI Conference on Artificial Intelligence 35 (6). Cited by: §2.3.
  • [32] R. Usbeck, A. N. Ngomo, B. Haarmann, A. Krithara, M. Roder, and G. Napolitano (2017) 7th Open Challenge on Question Answering over Linked Data (QALD-7). pp. 11. Cited by: §1.
  • [33] P. Veličković and C. Blundell (2021-05) Neural algorithmic reasoning. arXiv:2105.02761 [cs, math, stat]. External Links: 2105.02761 Cited by: §1, §2.1, §2.1, §5.
  • [34] J. Weston, S. Chopra, and A. Bordes (2015-11) Memory Networks. arXiv:1410.3916 [cs, stat]. External Links: 1410.3916 Cited by: §2.2.
  • [35] World Wide Web Consortium, W3C (2013) SPARQL 1.1 overview. External Links: Link Cited by: §4.1.
  • [36] K. Yi, J. Wu, C. Gan, A. Torralba, P. Kohli, and J. B. Tenenbaum (2019-01) Neural-symbolic VQA: disentangling reasoning from vision and language understanding. arXiv:1810.02338 [cs]. External Links: 1810.02338 Cited by: §2.3.
  • [37] W. Zaremba and I. Sutskever (2016-01) Reinforcement learning neural turing machines - revised. arXiv:1505.00521 [cs]. External Links: 1505.00521 Cited by: §2.2.