Transformers as Soft Reasoners over Language

02/14/2020
by Peter Clark, et al.

AI has long pursued the goal of having systems reason over *explicitly provided* knowledge, but building suitable representations has proved challenging. Here we explore whether transformers can similarly learn to reason (or emulate reasoning), but using rules expressed in language, thus bypassing a formal representation. We provide the first demonstration that this is possible, and characterize the extent of this capability. To do this, we use a collection of synthetic datasets that test increasing levels of reasoning complexity (number of rules, presence of negation, and depth of chaining). We find transformers appear to learn rule-based reasoning with high (99%) accuracy on these datasets, and in a way that generalizes to test data requiring substantially deeper chaining than in the training data (95%+ scores). We also demonstrate that the models transfer well to two hand-authored rulebases, and to rulebases paraphrased into more natural language. These findings are significant as they suggest a new role for transformers, namely as a limited "soft theorem prover" operating over explicit theories in language. This in turn suggests new possibilities for explainability, correctability, and counterfactual reasoning in question-answering. All datasets and a live demo are available at http://rule-reasoning.apps.allenai.org/
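
For concreteness, here is a minimal sketch (not the authors' code) of how the gold true/false label for one such synthetic example can be computed by forward chaining over a small rulebase under a closed-world assumption, where a statement that cannot be derived from the theory is labeled false. The entity and attribute names are invented for illustration.

```python
# Minimal illustrative sketch (not from the paper): computing the gold
# true/false label for a synthetic rule-reasoning example by forward
# chaining under a closed-world assumption. The transformer itself sees
# only the English rendering of these facts and rules plus the question.

from typing import FrozenSet, List, Tuple

Fact = Tuple[str, str]           # (entity, attribute), e.g. ("Alan", "cold")
Rule = Tuple[List[Fact], Fact]   # (body, head): if every body fact holds, head holds

def forward_chain(facts: FrozenSet[Fact], rules: List[Rule]) -> FrozenSet[Fact]:
    """Apply all rules until a fixed point is reached.

    The number of iterations needed to derive a fact is its chaining depth,
    the quantity the datasets vary to test deeper reasoning.
    """
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in known and all(b in known for b in body):
                known.add(head)
                changed = True
    return frozenset(known)

# Toy theory (invented names) in the spirit of the synthetic datasets.
facts = frozenset({("Alan", "cold")})           # "Alan is cold."
rules = [
    ([("Alan", "cold")], ("Alan", "quiet")),    # "If Alan is cold then Alan is quiet."
    ([("Alan", "quiet")], ("Alan", "smart")),   # "If Alan is quiet then Alan is smart."
]

closure = forward_chain(facts, rules)
question = ("Alan", "smart")                    # "Alan is smart. True or false?"
print(question in closure)                      # True: requires depth-2 chaining
```

The model never sees these tuples: it is given only the English sentences and the question, and must emulate this chaining procedure implicitly to predict the label.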

Related research:

Can Transformers Reason About Effects of Actions? (12/17/2020)
A recent work has shown that transformers are able to "reason" with fact...

AbductionRules: Training Transformers to Explain Unexpected Inputs (03/23/2022)
Transformers have recently been shown to be capable of reliably performi...

Pushing the Limits of Rule Reasoning in Transformers through Natural Language Satisfiability (12/16/2021)
Investigating the reasoning abilities of transformer models, and discove...

Learning Symbolic Rules for Reasoning in Quasi-Natural Language (11/23/2021)
Symbolic reasoning, rule-based symbol manipulation, is a hallmark of hum...

Does entity abstraction help generative Transformers reason? (01/05/2022)
Pre-trained language models (LMs) often struggle to reason logically or ...

NLProlog: Reasoning with Weak Unification for Question Answering in Natural Language (06/14/2019)
Rule-based models are attractive for various tasks because they inherent...

VisQA: X-raying Vision and Language Reasoning in Transformers (04/02/2021)
Visual Question Answering systems target answering open-ended textual qu...
