Language Models Are Greedy Reasoners: A Systematic Formal Analysis of Chain-of-Thought

10/03/2022
by   Abulhair Saparov, et al.

Large language models (LLMs) have shown remarkable reasoning capabilities given chain-of-thought prompts (examples with intermediate reasoning steps). Existing benchmarks measure reasoning ability indirectly, by evaluating accuracy on downstream tasks such as mathematical reasoning. However, it is unclear how these models obtain the answers and whether they rely on simple heuristics rather than the generated chain-of-thought. To enable systematic exploration of the reasoning ability of LLMs, we present a new synthetic question-answering dataset called PrOntoQA, where each example is generated from a synthetic world model represented in first-order logic. This allows us to parse the generated chain-of-thought into symbolic proofs for formal analysis. Our analysis of InstructGPT and GPT-3 shows that LLMs are quite capable of making correct individual deduction steps, and so are generally capable of reasoning, even in fictional contexts. However, they have difficulty with proof planning: when multiple valid deduction steps are available, they are not able to systematically explore the different options.
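The generation scheme the abstract describes can be illustrated with a minimal sketch. This is not the authors' actual PrOntoQA generator; the entity name, the fictional type names, and the `generate_example` helper are all hypothetical, and only linear chains of "Every A is a B" rules are modeled. It shows how a symbolic world model yields both a question and a gold chain-of-thought whose individual modus ponens steps can later be checked mechanically.

```python
# Hypothetical ontology (not from the paper): each pair (A, B)
# encodes the first-order rule "Every A is a B".
RULES = [
    ("wumpus", "vumpus"),
    ("vumpus", "tumpus"),
    ("tumpus", "dumpus"),
]

def generate_example(entity, start_type):
    """Build the context, the query, and a gold step-by-step proof."""
    context = [f"{entity} is a {start_type}."]
    context += [f"Every {a} is a {b}." for a, b in RULES]
    # Walk the rule chain, recording one modus ponens step per hop.
    proof, current = [], start_type
    for a, b in RULES:
        if a == current:
            proof.append(f"{entity} is a {a}. Every {a} is a {b}. "
                         f"So {entity} is a {b}.")
            current = b
    query = f"True or false: {entity} is a {current}."
    return context, query, proof

context, query, proof = generate_example("Max", "wumpus")
print(query)          # True or false: Max is a dumpus.
for step in proof:    # one deduction step per line
    print(step)
```

Because each proof step is produced from a known rule, a model's generated chain-of-thought can be parsed and compared against this gold proof step by step, separating errors in individual deductions from errors in proof planning.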


Related research

12/16/2022 · Teaching Small Language Models to Reason
Chain of thought prompting successfully improves the reasoning capabilit...

06/12/2023 · Recursion of Thought: A Divide-and-Conquer Approach to Multi-Context Reasoning with Language Models
Generating intermediate steps, or Chain of Thought (CoT), is an effectiv...

05/03/2023 · Visual Chain of Thought: Bridging Logical Gaps with Multimodal Infillings
Recent advances in large language models elicit reasoning in a chain of ...

06/01/2023 · Chain-Of-Thought Prompting Under Streaming Batch: A Case Study
Recently, Large Language Models (LLMs) have demonstrated remarkable capa...

05/30/2023 · GPT4GEO: How a Language Model Sees the World's Geography
Large language models (LLMs) have shown remarkable capabilities across a...

05/24/2023 · The Art of SOCRATIC QUESTIONING: Zero-shot Multimodal Reasoning with Recursive Thinking and Self-Questioning
Chain-of-Thought prompting (CoT) enables large-scale language models to ...

09/16/2022 · Text and Patterns: For Effective Chain of Thought, It Takes Two to Tango
Reasoning is a key pillar of human cognition and intelligence. In the pa...
