Differentiable Reasoning on Large Knowledge Bases and Natural Language

12/17/2019
by Pasquale Minervini et al.

Reasoning with knowledge expressed in natural language and Knowledge Bases (KBs) is a major challenge for Artificial Intelligence, with applications in machine reading, dialogue, and question answering. General neural architectures that jointly learn representations and transformations of text are very data-inefficient, and it is hard to analyse their reasoning process. These issues are addressed by end-to-end differentiable reasoning systems such as Neural Theorem Provers (NTPs), although they can only be used with small-scale symbolic KBs. In this paper we first propose Greedy NTPs (GNTPs), an extension to NTPs addressing their complexity and scalability limitations, thus making them applicable to real-world datasets. This result is achieved by dynamically constructing the computation graph of NTPs and including only the most promising proof paths during inference, thus obtaining orders of magnitude more efficient models. Then, we propose a novel approach for jointly reasoning over KBs and textual mentions, by embedding logic facts and natural language sentences in a shared embedding space. We show that GNTPs perform on par with NTPs at a fraction of their cost while achieving competitive link prediction results on large datasets, providing explanations for predictions, and inducing interpretable models. Source code, datasets, and supplementary material are available online at https://github.com/uclnlp/gntp.
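
To make the core idea concrete, the sketch below illustrates, under simplifying assumptions, how a goal can be unified only with its k most similar facts in embedding space rather than with the entire KB, and how a textual mention might be mapped into that same space. The names (rbf_kernel, encode_mention, greedy_fact_selection) are illustrative and not taken from the GNTP codebase; the actual system uses learned encoders, rule templates, and approximate nearest-neighbour search rather than the exhaustive scoring shown here.

```python
import numpy as np

def rbf_kernel(x, y):
    """NTP-style soft unification score: a Gaussian kernel on the
    L2 distance between two embeddings (or batches of embeddings)."""
    return np.exp(-np.linalg.norm(x - y, axis=-1) ** 2 / 2.0)

def encode_mention(token_embs):
    """Toy mention encoder: averages token embeddings so that a sentence
    like 'X is the capital of Y' lands in the same space as a symbolic
    predicate embedding (the paper learns such an encoder end-to-end)."""
    return np.mean(token_embs, axis=0)

def greedy_fact_selection(goal_emb, fact_embs, k=5):
    """Greedy step of GNTP-style inference: instead of unifying the goal
    with every fact (as a standard NTP would), keep only the k facts whose
    embeddings score highest under the unification kernel."""
    scores = rbf_kernel(goal_emb[None, :], fact_embs)  # (num_facts,)
    top_k = np.argpartition(-scores, kth=min(k, len(scores) - 1))[:k]
    return top_k, scores[top_k]

# Illustrative usage on a toy KB of 10,000 facts, each embedded as the
# concatenation [predicate ; subject ; object] of three 50-d vectors.
rng = np.random.default_rng(0)
dim = 3 * 50
kb_embeddings = rng.normal(size=(10_000, dim))
goal_embedding = rng.normal(size=(dim,))

indices, scores = greedy_fact_selection(goal_embedding, kb_embeddings, k=5)
print(indices, scores)  # only these 5 candidate proofs are expanded further
```

In the full model, this kind of top-k selection is applied at each unification step of the backward-chaining proof search, which is why the dynamically constructed computation graph is orders of magnitude smaller than the exhaustive one built by standard NTPs.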

