Pushing the Limits of Rule Reasoning in Transformers through Natural Language Satisfiability

12/16/2021
by Kyle Richardson, et al.

Investigating the reasoning abilities of transformer models, and discovering new challenging tasks for them, has been a topic of much interest. Recent studies have found these models to be surprisingly strong at performing deductive reasoning over formal logical theories expressed in natural language. A shortcoming of these studies, however, is that they do not take into account that logical theories, when sampled uniformly at random, do not necessarily lead to hard instances. We propose a new methodology for creating challenging algorithmic reasoning datasets that focus on natural language satisfiability (NLSat) problems. The key idea is to draw insights from empirical sampling of hard propositional SAT problems and from complexity-theoretic studies of language. This methodology allows us to distinguish easy from hard instances, and to systematically increase the complexity of existing reasoning benchmarks such as RuleTaker. We find that current transformers, given sufficient training data, are surprisingly robust at solving the resulting NLSat problems of substantially increased difficulty. They also exhibit some degree of scale-invariance: the ability to generalize to problems of larger size and scope. Our results, however, reveal important limitations too: a careful sampling of training data is crucial for building models that generalize to larger problems, and transformer models' limited scale-invariance suggests they are far from learning robust deductive reasoning algorithms.
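The empirical-sampling idea rests on the well-known phase transition in random k-SAT: instances drawn near a critical clause-to-variable ratio (roughly 4.26 for 3-SAT) are empirically far harder for solvers than instances drawn well below or above it. As a rough illustration of this sampling knob, and not the paper's actual pipeline, the Python sketch below draws a random 3-SAT instance at a chosen ratio, checks satisfiability by brute force, and renders the clauses with a toy English template. The `propN` naming and the verbalization template are illustrative assumptions, not RuleTaker's grammar.

```python
import itertools
import random

def sample_3sat(num_vars: int, ratio: float, rng: random.Random):
    """Sample a random 3-SAT instance with num_vars variables and
    round(ratio * num_vars) clauses. Each clause picks 3 distinct
    variables and negates each independently with probability 1/2."""
    num_clauses = round(ratio * num_vars)
    clauses = []
    for _ in range(num_clauses):
        chosen = rng.sample(range(num_vars), 3)
        # A literal is a (variable, negated) pair.
        clauses.append([(v, rng.random() < 0.5) for v in chosen])
    return clauses

def is_satisfiable(clauses, num_vars: int) -> bool:
    """Brute-force satisfiability check; adequate for the tiny
    instances used in this illustration."""
    for assignment in itertools.product([False, True], repeat=num_vars):
        # A clause is satisfied if some literal evaluates to True.
        if all(any(assignment[v] != neg for v, neg in clause)
               for clause in clauses):
            return True
    return False

def verbalize(clauses) -> str:
    """Render each clause as a simple English disjunction. This naming
    scheme ('prop0', ...) is a hypothetical stand-in for the paper's
    natural-language templates."""
    lines = []
    for clause in clauses:
        parts = [f"prop{v} is {'false' if neg else 'true'}"
                 for v, neg in clause]
        lines.append("At least one of the following holds: "
                     + ", or ".join(parts) + ".")
    return "\n".join(lines)

if __name__ == "__main__":
    rng = random.Random(0)
    # Random 3-SAT is empirically hardest near the satisfiability
    # phase transition, around 4.26 clauses per variable.
    instance = sample_3sat(num_vars=10, ratio=4.26, rng=rng)
    print(verbalize(instance))
    print("SATISFIABLE" if is_satisfiable(instance, 10) else "UNSATISFIABLE")
```

Sweeping `ratio` from well below to well above the critical value reproduces the classic easy-hard-easy difficulty curve for complete solvers, which is what makes ratio-controlled sampling a useful knob for constructing hard NLSat benchmarks rather than sampling theories uniformly at random.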

Related research

11/10/2022 · Can Transformers Reason in Fragments of Natural Language?
State-of-the-art deep-learning-based approaches to Natural Language Proc...

02/14/2020 · Transformers as Soft Reasoners over Language
AI has long pursued the goal of having systems reason over *explicitly p...

03/23/2022 · AbductionRules: Training Transformers to Explain Unexpected Inputs
Transformers have recently been shown to be capable of reliably performi...

07/11/2022 · Exploring Length Generalization in Large Language Models
The ability to extrapolate from short problem instances to longer ones i...

09/30/2020 · Measuring Systematic Generalization in Neural Proof Generation with Transformers
We are interested in understanding how well Transformer language models ...

10/07/2022 · Out-of-Distribution Generalization in Algorithmic Reasoning Through Curriculum Learning
Out-of-distribution generalization (OODG) is a longstanding challenge fo...

10/19/2021 · Generating Symbolic Reasoning Problems with Transformer GANs
Constructing training data for symbolic reasoning domains is challenging...
