Chain of Thought Prompting Elicits Reasoning in Large Language Models

01/28/2022
by Jason Wei, et al.

Although scaling up language model size has reliably improved performance on a range of NLP tasks, even the largest models currently struggle with certain reasoning tasks such as math word problems, symbolic manipulation, and commonsense reasoning. This paper explores the ability of language models to generate a coherent chain of thought – a series of short sentences that mimic the reasoning process a person might have when responding to a question. Experiments show that inducing a chain of thought via prompting can enable sufficiently large language models to better perform reasoning tasks that otherwise have flat scaling curves.
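
To make the prompting setup concrete, here is a minimal sketch in Python that assumes nothing beyond standard string handling. The exemplar and follow-up question mirror the arithmetic examples shown in the paper's introduction, while the helper name build_cot_prompt is purely illustrative; the assembled prompt would simply be fed to a language model as plain input text.

    # Illustrative sketch of chain-of-thought prompting: a few-shot prompt whose
    # exemplar answer spells out intermediate reasoning steps before the final answer.
    # The exemplar mirrors the style of the paper's introductory example;
    # build_cot_prompt is a hypothetical helper name, not code from the paper.

    COT_EXEMPLAR = (
        "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
        "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
        "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis "
        "balls. 5 + 6 = 11. The answer is 11.\n\n"
    )

    def build_cot_prompt(question: str) -> str:
        # Prepend the worked exemplar so the model imitates step-by-step reasoning.
        return COT_EXEMPLAR + "Q: " + question + "\nA:"

    if __name__ == "__main__":
        # The assembled string would be passed to a (sufficiently large) language
        # model as ordinary input text; the model is expected to continue with its
        # own chain of thought and then a final answer.
        print(build_cot_prompt(
            "The cafeteria had 23 apples. If they used 20 to make lunch and "
            "bought 6 more, how many apples do they have?"
        ))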

Related research

Teaching Small Language Models to Reason (12/16/2022)
Chain of thought prompting successfully improves the reasoning capabilit...

Language Models are Multilingual Chain-of-Thought Reasoners (10/06/2022)
We evaluate the reasoning abilities of large language models in multilin...

Text and Patterns: For Effective Chain of Thought, It Takes Two to Tango (09/16/2022)
Reasoning is a key pillar of human cognition and intelligence. In the pa...

Transcending Scaling Laws with 0.1% Extra Compute (10/20/2022)
Scaling language models improves performance but comes with significant ...

ThoughtSource: A central hub for large language model reasoning data (01/27/2023)
Large language models (LLMs) such as GPT-3 and ChatGPT have recently dem...

Exploring an LM to generate Prolog Predicates from Mathematics Questions (09/07/2023)
Recently, there has been a surge in interest in NLP driven by ChatGPT. C...

Beneath Surface Similarity: Large Language Models Make Reasonable Scientific Analogies after Structure Abduction (05/22/2023)
Analogical reasoning is essential for human cognition, allowing us to co...
