Learning to Reason With Relational Abstractions

10/06/2022
by Andrew J. Nam, et al.

Large language models have recently shown promising progress in mathematical reasoning when fine-tuned on human-generated sequences that walk through the steps of a solution. However, these solution sequences are not formally structured, and the resulting model-generated sequences may not reflect the kind of systematic reasoning we would expect of an expert human. In this paper, we study how to build stronger reasoning capabilities into language models using the idea of relational abstractions. We introduce new types of sequences that explicitly provide an abstract characterization of the transitions through intermediate solution steps toward the goal state. We find that models supplied with such sequences as prompts solve tasks with significantly higher accuracy, and that models trained to produce such sequences solve problems better than models trained on previously used human-generated sequences and other baselines. Our work thus takes several steps toward elucidating and improving how language models perform on tasks requiring multi-step mathematical reasoning.
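To make the contrast concrete, here is a minimal sketch of the two prompt styles the abstract contrasts: a free-form, human-style solution trace versus one annotated with explicit relational abstractions over state transitions. The bracketed `[state: ...]` / `[apply: ...]` annotation format is a hypothetical illustration, not the paper's actual sequence format.

```python
# Hypothetical illustration of the difference between a loosely
# structured solution trace and one annotated with relational
# abstractions that make each state transition explicit.

def plain_trace() -> str:
    # Human-style, informally structured solution steps.
    return "\n".join([
        "Q: Tom has 3 apples, buys 5 more, then gives away 2. How many are left?",
        "First, 3 + 5 = 8.",
        "Then, 8 - 2 = 6.",
        "A: 6",
    ])

def relational_trace() -> str:
    # Each step names the relation applied and the state it produces,
    # so the transition structure toward the goal state is explicit.
    return "\n".join([
        "Q: Tom has 3 apples, buys 5 more, then gives away 2. How many are left?",
        "[state: apples=3]",
        "[apply: add(apples, 5) -> apples=8]",
        "[apply: subtract(apples, 2) -> apples=6]",
        "A: 6",
    ])

if __name__ == "__main__":
    print(plain_trace())
    print()
    print(relational_trace())
```

Either string could be used as a fine-tuning target or few-shot prompt; the claim under study is that the second, relationally annotated form yields more systematic multi-step reasoning.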

Related research

- STREET: A Multi-Task Structured Reasoning and Explanation Benchmark (02/13/2023)
- Certified Reasoning with Language Models (06/06/2023)
- In-Context Analogical Reasoning with Pre-Trained Language Models (05/28/2023)
- Measuring and Improving BERT's Mathematical Abilities by Predicting the Order of Reasoning (06/07/2021)
- LEMMA: Bootstrapping High-Level Mathematical Reasoning with Learned Symbolic Abstractions (11/16/2022)
- PINTO: Faithful Language Reasoning Using Prompt-Generated Rationales (11/03/2022)
- STEPS: A Benchmark for Order Reasoning in Sequential Tasks (06/07/2023)
