Explainable Natural Language Reasoning via Conceptual Unification

09/30/2020
by   Marco Valentino, et al.

This paper presents an abductive framework for multi-hop and interpretable textual inference. The reasoning process is guided by the notions of unification power and plausibility of an explanation, computed through the interaction of two major architectural components: (a) an analogical reasoning model that ranks explanatory facts by leveraging unification patterns in a corpus of explanations; (b) an abductive reasoning model that searches for the best explanation, realised via conceptual abstraction and subsequent unification. We demonstrate that Step-wise Conceptual Unification is effective for unsupervised question answering, and as an explanation extractor in combination with state-of-the-art Transformers. An empirical evaluation on the Worldtree corpus and the ARC Challenge supports the following conclusions: (1) the question answering model outperforms competitive neural and multi-hop baselines without requiring any explicit training on answer prediction; (2) when used as an explanation extractor, the proposed model significantly improves the performance of Transformers, leading to state-of-the-art results on the Worldtree corpus; (3) analogical and abductive reasoning are highly complementary for achieving sound explanatory inference, a result that demonstrates the impact of the unification patterns on both performance and interpretability.
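The core idea of ranking explanatory facts by their unification power can be illustrated with a minimal sketch. The code below is a simplified, hypothetical approximation (not the authors' implementation): a candidate fact scores highly when it appears in the explanations of many questions similar to the one being answered, here using plain Jaccard word overlap as the similarity measure.

```python
from collections import Counter


def similarity(q1: str, q2: str) -> float:
    """Jaccard overlap between the word sets of two questions."""
    s1, s2 = set(q1.lower().split()), set(q2.lower().split())
    return len(s1 & s2) / len(s1 | s2) if s1 | s2 else 0.0


def unification_scores(question: str, explained_corpus):
    """Score candidate facts by how often they occur in explanations
    of similar questions -- a rough proxy for 'unification power'.

    explained_corpus: list of (question_text, [explanation_facts]) pairs.
    Returns facts ranked by descending score.
    """
    scores = Counter()
    for past_question, facts in explained_corpus:
        sim = similarity(question, past_question)
        for fact in facts:
            scores[fact] += sim
    return scores.most_common()


# Toy corpus of previously explained questions (illustrative only).
corpus = [
    ("why does a metal spoon get hot in soup?",
     ["metal is a thermal conductor", "conduction transfers heat"]),
    ("why does a copper wire carry current?",
     ["metal is an electrical conductor"]),
]

ranking = unification_scores("why does an iron rod get hot in a fire?", corpus)
```

In this toy example, the facts shared with the more similar heat-related question outrank the electrical fact; the actual framework replaces the lexical overlap with analogical and abductive models over conceptual abstractions.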


Related research

12/17/2020: Autoregressive Reasoning over Chains of Facts with Transformers
This paper proposes an iterative inference algorithm for multi-hop expla...

10/25/2019: QASC: A Dataset for Question Answering via Sentence Composition
Composing knowledge from multiple pieces of texts is a key challenge in ...

07/25/2021: Hybrid Autoregressive Solver for Scalable Abductive Natural Language Inference
Regenerating natural language explanations for science questions is a ch...

09/07/2021: Exploiting Reasoning Chains for Multi-hop Science Question Answering
We propose a novel Chain Guided Retriever-reader (CGR) framework to mode...

03/31/2020: Unification-based Reconstruction of Explanations for Science Questions
The paper presents a framework to reconstruct explanations for multiple ...

08/05/2022: Going Beyond Approximation: Encoding Constraints for Explainable Multi-hop Inference via Differentiable Combinatorial Solvers
Integer Linear Programming (ILP) provides a viable mechanism to encode e...

09/30/2020: Explaining AI as an Exploratory Process: The Peircean Abduction Model
Current discussions of "Explainable AI" (XAI) do not much consider the r...
