
Least-to-Most Prompting Enables Complex Reasoning in Large Language Models

by Denny Zhou et al.

We propose a novel prompting strategy, least-to-most prompting, that enables large language models to better perform multi-step reasoning tasks. Least-to-most prompting first breaks a complex problem down into a list of subproblems, and then solves the subproblems sequentially, so that solving a given subproblem is facilitated by the model's answers to previously solved subproblems. Experiments on symbolic manipulation, compositional generalization, and numerical reasoning demonstrate that least-to-most prompting can generalize to examples that are harder than those seen in the prompt context, outperforming other prompting-based approaches by a large margin. A notable empirical result is that the GPT-3 code-davinci-002 model with least-to-most prompting can solve the SCAN benchmark with an accuracy of 99.7% using just 14 examples. By comparison, the neural-symbolic models in the literature specialized for solving SCAN are trained on the full training set of more than 15,000 examples.
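The two-stage control flow described in the abstract (decompose, then solve subproblems sequentially while feeding earlier answers back into the prompt) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `ask_model` is a hypothetical stand-in for a real LLM API call, and `decompose` fakes a fixed decomposition rather than parsing model output.

```python
calls = []  # record of every prompt sent, for inspection

def ask_model(prompt: str) -> str:
    """Placeholder for a call to a real LLM (e.g. code-davinci-002).

    Here it just logs the prompt and returns a synthetic answer so the
    sketch is self-contained and runnable.
    """
    calls.append(prompt)
    return f"A{len(calls)}"

def decompose(problem: str) -> list[str]:
    """Stage 1: ask the model to reduce the problem to subproblems.

    A real implementation would parse the model's numbered list of
    subproblems; this stub returns a fixed decomposition whose final
    step is the original question.
    """
    ask_model(f"Break this problem into simpler subproblems:\n{problem}")
    return ["subproblem 1", "subproblem 2", problem]

def least_to_most(problem: str) -> str:
    """Stage 2: solve subproblems in order, accumulating Q/A pairs.

    Each new prompt contains all previously solved subproblems, so later
    (harder) steps can build on earlier (easier) ones.
    """
    context = ""
    answer = ""
    for sub in decompose(problem):
        prompt = f"{context}Q: {sub}\nA:"
        answer = ask_model(prompt)
        context += f"Q: {sub}\nA: {answer}\n"
    return answer  # answer to the last subproblem, i.e. the original problem
```

Running `least_to_most("original question")` issues one decomposition call plus one call per subproblem; the final prompt contains every earlier question-answer pair, which is what lets the model reuse its own intermediate results.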



