Capturing Failures of Large Language Models via Human Cognitive Biases

02/24/2022
by Erik Jones, et al.

Large language models generate complex, open-ended outputs: instead of outputting a single class, they can write summaries, generate dialogue, and produce working code. In order to study the reliability of these open-ended systems, we must understand not just when they fail, but also how they fail. To approach this, we draw inspiration from human cognitive biases – systematic patterns of deviation from rational judgement. Specifically, we use cognitive biases to (i) identify inputs that models are likely to err on, and (ii) develop tests to qualitatively characterize their errors on these inputs. Using code generation as a case study, we find that OpenAI's Codex errs predictably based on how the input prompt is framed, adjusts outputs towards anchors, and is biased towards outputs that mimic frequent training examples. We then use our framework to uncover high-impact errors such as incorrectly deleting files. Our experiments suggest that cognitive science can be a useful jumping-off point to better understand how contemporary machine learning systems behave.
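The anchoring probe described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual harness: `make_anchoring_prompt` and `is_anchored` are hypothetical helper names, and the model call itself is left out (any code-generation model could be plugged in).

```python
def make_anchoring_prompt(task_docstring: str, anchor_solution: str) -> str:
    """Prepend a related-but-incorrect 'anchor' function to a code prompt.

    If the model is susceptible to anchoring, its completion will drift
    toward the anchor instead of solving the stated task from scratch.
    """
    return (
        "# An example function (incorrect for the task below):\n"
        f"{anchor_solution}\n\n"
        "def solve(x):\n"
        f'    """{task_docstring}"""\n'
    )


def is_anchored(completion: str, anchor_snippet: str) -> bool:
    """Crude qualitative check: did the anchor's code leak into the output?"""
    return anchor_snippet in completion
```

For example, prompting with the task "Add one to x." anchored on a function that returns `x - 1`, and then checking whether `return x - 1` appears in the model's completion, gives a simple yes/no signal of the adjustment-toward-anchors behavior the abstract describes.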


