
AbductionRules: Training Transformers to Explain Unexpected Inputs

by Nathan Young et al.

Transformers have recently been shown to be capable of reliably performing logical reasoning over facts and rules expressed in natural language, but abductive reasoning - inference to the best explanation of an unexpected observation - has been underexplored despite significant applications to scientific discovery, common-sense reasoning, and model interpretability. We present AbductionRules, a group of natural language datasets designed to train and test generalisable abduction over natural-language knowledge bases. We use these datasets to finetune pretrained Transformers and discuss their performance, finding that our models learned generalisable abductive techniques but also learned to exploit the structure of our data. Finally, we discuss the viability of this approach to abductive reasoning and ways in which it may be improved in future work.
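To make the task concrete, here is a toy illustration of the abductive setting the datasets target (a sketch, not the authors' code or data): given a knowledge base of facts and Horn-style rules, and an observation that is not entailed by the known facts, abduction searches for a fact that, if assumed, would make the observation follow. The rule and fact strings below are invented for illustration.

```python
from itertools import chain

def entails(facts, rules, goal):
    """Forward-chain over Horn rules to a fixed point, then test the goal."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and premises <= known:
                known.add(conclusion)
                changed = True
    return goal in known

def abduce(facts, rules, observation):
    """Return candidate facts that, if assumed, would explain the observation."""
    if entails(facts, rules, observation):
        return []  # the observation is not unexpected; nothing to explain
    vocab = set(chain(*[p for p, _ in rules])) | {c for _, c in rules}
    # Exclude known facts and the trivial "explanation" of assuming the
    # observation itself.
    candidates = vocab - set(facts) - {observation}
    return sorted(f for f in candidates
                  if entails(set(facts) | {f}, rules, observation))

# Toy knowledge base in the spirit of natural-language rule datasets
rules = [
    (frozenset({"charlie is a cat"}), "charlie is furry"),
    (frozenset({"charlie is furry", "charlie is small"}), "charlie is cute"),
]
facts = {"charlie is small"}
print(abduce(facts, rules, "charlie is cute"))
# → ['charlie is a cat', 'charlie is furry']
```

A Transformer trained on such data must learn to perform this search implicitly, mapping a natural-language context and observation directly to an explanatory fact; "best" explanation here is simplified to "any single fact that suffices".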


