Emergent Analogical Reasoning in Large Language Models

12/19/2022
by Taylor Webb, et al.
The recent advent of large language models - large neural networks trained on a simple predictive objective over a massive corpus of natural language - has reinvigorated debate over whether human cognitive capacities might emerge in such generic models given sufficient training data. Of particular interest is the ability of these models to reason about novel problems zero-shot, without any direct training on those problems. In human cognition, this capacity is closely tied to an ability to reason by analogy. Here, we performed a direct comparison between human reasoners and a large language model (GPT-3) on a range of analogical tasks, including a novel text-based matrix reasoning task closely modeled on Raven's Progressive Matrices. We found that GPT-3 displayed a surprisingly strong capacity for abstract pattern induction, matching or even surpassing human capabilities in most settings. Our results indicate that large language models such as GPT-3 have acquired an emergent ability to find zero-shot solutions to a broad range of analogy problems.


research · 08/30/2023
Response: Emergent analogical reasoning in large language models
In their recent Nature Human Behaviour paper, "Emergent analogical reaso...

research · 05/28/2023
In-Context Analogical Reasoning with Pre-Trained Language Models
Analogical reasoning is a fundamental capacity of human cognition that a...

research · 05/23/2023
Can Large Language Models Infer and Disagree Like Humans?
Large Language Models (LLMs) have shown stellar achievements in solving ...

research · 08/03/2023
Large Language Model Displays Emergent Ability to Interpret Novel Literary Metaphors
Recent advances in the performance of large language models (LLMs) have ...

research · 07/10/2023
Large Language Models as General Pattern Machines
We observe that pre-trained large language models (LLMs) are capable of ...

research · 03/24/2021
Thinking Aloud: Dynamic Context Generation Improves Zero-Shot Reasoning Performance of GPT-2
Thinking aloud is an effective meta-cognitive strategy human reasoners a...

research · 09/29/2022
Zero-shot visual reasoning through probabilistic analogical mapping
Human reasoning is grounded in an ability to identify highly abstract co...