Cognitive Effects in Large Language Models

08/28/2023
by Jonathan Shaki et al.

Large Language Models (LLMs) such as ChatGPT have received enormous attention over the past year and are now used by hundreds of millions of people every day. The rapid adoption of this technology naturally raises questions about the possible biases such models might exhibit. In this work, we tested one of these models (GPT-3) on a range of cognitive effects, systematic patterns that typically appear in human cognitive tasks. We found that LLMs are indeed prone to several human cognitive effects: GPT-3 exhibited the priming, distance, SNARC, and size congruity effects, while the anchoring effect was absent. We describe our methodology, and in particular how we converted real-world experiments into text-based ones. Finally, we speculate on why GPT-3 exhibits these effects and discuss whether they are imitated or reinvented.
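To make the "real-world experiment to text-based experiment" conversion concrete, here is a minimal sketch of how one such task, a number-comparison trial probing the distance effect, might be phrased as text prompts. The helper names and prompt wording are hypothetical illustrations, not the paper's actual harness:

```python
from itertools import combinations

def make_comparison_prompt(a: int, b: int) -> str:
    """Phrase a single number-comparison trial as a text prompt."""
    return f"Which number is larger, {a} or {b}? Answer with the number only."

def build_trials(numbers):
    """Generate all pairwise comparison trials, tagged with numerical distance."""
    trials = []
    for a, b in combinations(numbers, 2):
        trials.append({
            "prompt": make_comparison_prompt(a, b),
            "distance": abs(a - b),      # the independent variable of interest
            "answer": str(max(a, b)),    # ground truth for scoring the model
        })
    return trials

trials = build_trials([1, 2, 8, 9])
# Grouping model accuracy (or token log-probabilities) by "distance" would
# reveal a distance effect: close pairs (8 vs 9) harder than distant ones (1 vs 9).
```

Each generated prompt would be sent to the model, and accuracy aggregated by numerical distance; a human-like distance effect predicts worse performance as the distance shrinks.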

Related research

06/21/2023
Evaluating Large Language Models with NeuBAROCO: Syllogistic Reasoning Ability and Human-like Biases
This paper investigates whether current large language models exhibit bi...

06/06/2023
Turning large language models into cognitive models
Large language models are powerful systems that excel at many tasks, ran...

08/15/2023
Using Artificial Populations to Study Psychological Phenomena in Neural Models
The recent proliferation of research into transformer based natural lang...

05/08/2023
Do Large Language Models Show Decision Heuristics Similar to Humans? A Case Study Using GPT-3.5
A Large Language Model (LLM) is an artificial intelligence system that h...

05/18/2023
Numeric Magnitude Comparison Effects in Large Language Models
Large Language Models (LLMs) do not differentially represent numbers, wh...

04/02/2023
Eight Things to Know about Large Language Models
The widespread public deployment of large language models (LLMs) in rece...

02/24/2022
Capturing Failures of Large Language Models via Human Cognitive Biases
Large language models generate complex, open-ended outputs: instead of o...
