Least-to-Most Prompting Enables Complex Reasoning in Large Language Models

05/21/2022
by Denny Zhou, et al.

We propose a novel prompting strategy, least-to-most prompting, that enables large language models to better perform multi-step reasoning tasks. Least-to-most prompting first decomposes a complex problem into a list of simpler subproblems and then solves them sequentially, whereby solving each subproblem is facilitated by the model's answers to previously solved subproblems. Experiments on symbolic manipulation, compositional generalization, and numerical reasoning demonstrate that least-to-most prompting can generalize to examples that are harder than those seen in the prompt context, outperforming other prompting-based approaches by a large margin. A notable empirical result is that the GPT-3 code-davinci-002 model with least-to-most prompting can solve the SCAN benchmark with an accuracy of 99.7% using only 14 examples. By comparison, the neural-symbolic models in the literature that specialize in solving SCAN are trained on the full training set of more than 15,000 examples.
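The two-stage procedure is straightforward to implement on top of any text-completion model. Below is a minimal Python sketch, assuming a generic llm callable that maps a prompt string to a completion string; the prompt wording and the helper names (decompose, solve_least_to_most) are illustrative assumptions, not the paper's exact exemplars, which use few-shot demonstrations rather than the zero-shot instructions shown here.

from typing import Callable, List

def decompose(llm: Callable[[str], str], problem: str) -> List[str]:
    """Stage 1 (decomposition): ask the model to break the problem
    into simpler subproblems, one per line."""
    prompt = (
        "Break the following problem into a sequence of simpler "
        "subproblems, one per line.\n\nProblem: " + problem
    )
    return [line.strip() for line in llm(prompt).splitlines() if line.strip()]

def solve_least_to_most(llm: Callable[[str], str], problem: str) -> str:
    """Stage 2 (sequential solving): solve subproblems in order,
    appending each answer so later subproblems can build on it."""
    context = "Problem: " + problem
    answer = ""
    for subproblem in decompose(llm, problem):
        prompt = context + "\nQ: " + subproblem + "\nA:"
        answer = llm(prompt).strip()
        context = prompt + " " + answer  # feed the solved subproblem forward
    return answer  # the final subproblem's answer resolves the original problem

The key design point, as described in the abstract, is that the prompt for each subproblem contains all previously solved subproblems and their answers, which is what lets the model generalize to problems harder than the exemplars in its prompt context.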


Related research

12/16/2022  ALERT: Adapting Language Models to Reasoning Tasks
Current large language models can perform reasonably well on complex tas...

08/01/2023  Skills-in-Context Prompting: Unlocking Compositionality in Large Language Models
We consider the problem of eliciting compositional generalization capabi...

12/16/2022  The Impact of Symbolic Representations on In-context Learning for Few-shot Reasoning
Pre-trained language models (LMs) have shown remarkable reasoning perfor...

06/29/2023  A Hybrid System for Systematic Generalization in Simple Arithmetic Problems
Solving symbolic reasoning problems that require compositionality and sy...

05/01/2023  Learning to Reason and Memorize with Self-Notes
Large language models have been shown to struggle with limited context m...

10/07/2022  Out-of-Distribution Generalization in Algorithmic Reasoning Through Curriculum Learning
Out-of-distribution generalization (OODG) is a longstanding challenge fo...

10/12/2022  CTL++: Evaluating Generalization on Never-Seen Compositional Patterns of Known Functions, and Compatibility of Neural Representations
Well-designed diagnostic tasks have played a key role in studying the fa...
