Behavioral estimates of conceptual structure are robust across tasks in humans but not large language models

04/05/2023
by   Siddharth Suresh, et al.

Neural network models of language have long been used as a tool for developing hypotheses about conceptual representation in the mind and brain. For many years, such use involved extracting vector-space representations of words and using distances among these to predict or understand human behavior in various semantic tasks. With contemporary language AIs, however, it is possible to interrogate the latent structure of conceptual representations using methods nearly identical to those commonly used with human participants. The current work uses two common techniques borrowed from cognitive psychology to estimate and compare lexical-semantic structure in both humans and a well-known AI, the DaVinci variant of GPT-3. In humans, we show that conceptual structure is robust to differences in culture, language, and method of estimation. Structures estimated from AI behavior, while individually fairly consistent with those estimated from human behavior, depend much more upon the particular task used to generate the responses: responses generated by the very same model in the two tasks yield estimates of conceptual structure that cohere less with one another than do the human structure estimates. The results suggest one important way that knowledge inhering in contemporary AIs can differ from human cognition.
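The classic vector-space approach described above can be sketched in a few lines: represent each word as a vector and take the geometry between vectors as an estimate of conceptual structure. The embeddings below are hypothetical toy values invented for illustration, not the output of any model used in the study.

```python
# Minimal sketch of the vector-space approach: estimate lexical-semantic
# structure from distances among word vectors. All vectors here are
# hypothetical toy embeddings chosen for illustration only.
from math import sqrt

def cosine_similarity(u, v):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical embeddings: "dog" and "wolf" point in similar directions,
# while "banana" does not.
vectors = {
    "dog":    [0.9, 0.8, 0.1],
    "wolf":   [0.8, 0.9, 0.2],
    "banana": [0.1, 0.2, 0.9],
}

sim_dog_wolf = cosine_similarity(vectors["dog"], vectors["wolf"])
sim_dog_banana = cosine_similarity(vectors["dog"], vectors["banana"])

# If the model's geometry mirrors human conceptual structure, taxonomic
# neighbors should sit closer together than unrelated items.
assert sim_dog_wolf > sim_dog_banana
```

In practice such pairwise similarities are computed over many word pairs and correlated with human similarity judgments; the study's point is that behavioral probes of the model itself need not agree with one another the way human tasks do.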


