Numeric Magnitude Comparison Effects in Large Language Models

05/18/2023
by Raj Sanjay Shah, et al.

Large Language Models (LLMs) do not differentially represent numbers, which are pervasive in text. In contrast, neuroscience research has identified distinct neural representations for numbers and words. In this work, we investigate how well popular LLMs capture the magnitudes of numbers (e.g., that 4 < 5) from a behavioral lens. Prior research on the representational capabilities of LLMs evaluates whether they show human-level performance, for instance, high overall accuracy on standard benchmarks. Here, we ask a different question, one inspired by cognitive science: How closely do the number representations of LLMs correspond to those of human language users, who typically demonstrate the distance, size, and ratio effects? We depend on a linking hypothesis to map the similarities among the model embeddings of number words and digits to human response times. The results reveal surprisingly human-like representations across language models of different architectures, despite the absence of the neural circuitry that directly supports these representations in the human brain. This research shows the utility of understanding LLMs using behavioral benchmarks and points the way to future work on the number representations of LLMs and their cognitive plausibility.
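The analysis described above can be sketched with a toy example. The code below does not use a real LLM; instead it stands in for model embeddings with a hypothetical "mental number line" encoding (Gaussian bumps at log-compressed positions, a common assumption in cognitive models of numerosity). It then checks the two behavioral signatures the abstract mentions: the distance effect (numbers farther apart are less similar, hence easier to discriminate) and the size effect (for equal distance, larger numbers are more similar, hence harder to discriminate). The function names and parameters are illustrative, not from the paper.

```python
import numpy as np

def bump_embedding(n, dim=64, width=0.3):
    """Hypothetical stand-in for an LLM's embedding of the number n:
    a Gaussian bump centered at log(n) on a discretized number line."""
    xs = np.linspace(0.0, np.log(10), dim)
    v = np.exp(-((xs - np.log(n)) ** 2) / (2 * width ** 2))
    return v / np.linalg.norm(v)

def cosine(a, b):
    # Embeddings are unit-normalized, so the dot product is cosine similarity.
    return float(a @ b)

emb = {n: bump_embedding(n) for n in range(1, 10)}

# Distance effect: similarity (a proxy for confusability and slower
# human response times under the linking hypothesis) should fall as
# numerical distance grows.
s_close = cosine(emb[4], emb[5])   # distance 1
s_far   = cosine(emb[4], emb[8])   # distance 4
print(s_close > s_far)

# Size effect: for equal numerical distance, larger pairs are more
# similar under log compression (8 vs 9 is harder than 1 vs 2).
s_small = cosine(emb[1], emb[2])
s_large = cosine(emb[8], emb[9])
print(s_large > s_small)
```

Running a comparable analysis on real model embeddings would replace `bump_embedding` with lookups into an LLM's embedding table for number words ("four") and digits ("4"), keeping the similarity-to-response-time linking step the same.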


