Automatic Chain of Thought Prompting in Large Language Models

10/07/2022
by Zhuosheng Zhang, et al.

Large language models (LLMs) can perform complex reasoning by generating intermediate reasoning steps. Providing these steps for prompting demonstrations is called chain-of-thought (CoT) prompting. CoT prompting has two major paradigms. One leverages a simple prompt like "Let's think step by step" to facilitate step-by-step thinking before answering a question. The other uses a few manual demonstrations one by one, each composed of a question and a reasoning chain that leads to an answer. The superior performance of the second paradigm hinges on the hand-crafting of task-specific demonstrations one by one. We show that such manual efforts may be eliminated by leveraging LLMs with the "Let's think step by step" prompt to generate reasoning chains for demonstrations one by one, i.e., let's think not just step by step, but also one by one. However, these generated chains often come with mistakes. To mitigate the effect of such mistakes, we find that diversity matters for automatically constructing demonstrations. We propose an automatic CoT prompting method: Auto-CoT. It samples questions with diversity and generates reasoning chains to construct demonstrations. On ten public benchmark reasoning tasks with GPT-3, Auto-CoT consistently matches or exceeds the performance of the CoT paradigm that requires manual designs of demonstrations. Code is available at https://github.com/amazon-research/auto-cot
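
The abstract compresses the Auto-CoT recipe into one sentence: sample diverse questions, let the model generate its own reasoning chains with the zero-shot trigger, and use the results as demonstrations. Below is a minimal sketch of that pipeline, not the authors' released code (see the repository linked above). It assumes a sentence-transformers encoder and scikit-learn k-means for the diversity-based sampling, and takes any text-completion function llm(prompt) -> str as a hypothetical stand-in for the GPT-3 calls used in the paper.

```python
from typing import Callable

from sentence_transformers import SentenceTransformer  # pip install sentence-transformers
from sklearn.cluster import KMeans                      # pip install scikit-learn


def build_auto_cot_demos(questions: list[str],
                         llm: Callable[[str], str],
                         num_demos: int = 8) -> str:
    """Construct CoT demonstrations automatically: cluster questions for
    diversity, then let the model write a rationale for one question per cluster."""
    # Step 1: embed the task's questions and partition them into num_demos clusters,
    # so the sampled demonstrations cover different regions of the question space.
    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    labels = KMeans(n_clusters=num_demos, n_init=10, random_state=0).fit_predict(
        encoder.encode(questions)
    )

    # Step 2: for one representative question per cluster, elicit a reasoning
    # chain with the zero-shot prompt "Let's think step by step."
    demos = []
    for cluster_id in range(num_demos):
        question = next(q for q, label in zip(questions, labels) if label == cluster_id)
        rationale = llm(f"Q: {question}\nA: Let's think step by step.")
        demos.append(f"Q: {question}\nA: Let's think step by step. {rationale}")
    return "\n\n".join(demos)


def answer_with_auto_cot(demos: str, test_question: str,
                         llm: Callable[[str], str]) -> str:
    # Step 3: prepend the generated demonstrations to the test question and
    # query the model once more for the final answer.
    return llm(f"{demos}\n\nQ: {test_question}\nA: Let's think step by step.")
```

The released implementation in the repository above refines this outline further (for example, with simple heuristics for filtering overly long questions and rationales), which are omitted here for brevity.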

Related research

02/01/2023 · Synthetic Prompting: Generating Chain-of-Thought Demonstrations for Large Language Models
Large language models can perform various reasoning tasks by using chain...

03/16/2023 · ART: Automatic multi-step reasoning and tool-use for large language models
Large language models (LLMs) can perform complex reasoning in few- and z...

02/02/2023 · Multimodal Chain-of-Thought Reasoning in Language Models
Large language models (LLMs) have shown impressive performance on comple...

04/23/2023 · Enhancing Chain-of-Thoughts Prompting with Iterative Bootstrapping in Large Language Models
Large language models (LLMs) can achieve highly effective performance on...

05/26/2023 · Demo2Code: From Summarizing Demonstrations to Synthesizing Code via Extended Chain-of-Thought
Language instructions and demonstrations are two natural ways for users ...

10/03/2022 · Complexity-Based Prompting for Multi-Step Reasoning
We study the task of prompting large-scale language models to perform mu...

06/06/2023 · Prompt Space Optimizing Few-shot Reasoning Success with Large Language Models
Prompt engineering is an essential technique for enhancing the abilities...
