Semantic features of object concepts generated with GPT-3

02/08/2022
by Hannes Hansen, et al.

Semantic features have long played a central role in investigating the nature of our conceptual representations. Yet the enormous time and effort required to empirically sample and norm features from human raters has restricted their use to a limited set of manually curated concepts. Given recent promising developments with transformer-based language models, here we asked whether it was possible to use such models to automatically generate meaningful lists of properties for arbitrary object concepts, and whether these models would produce features similar to those found in humans. To this end, we probed a GPT-3 model to generate semantic features for 1,854 objects and compared the automatically generated features to existing human feature norms. GPT-3 generated many more features than humans, yet showed a similar distribution in the types of generated features. Generated feature norms rivaled human norms in predicting similarity, relatedness, and category membership, while variance partitioning demonstrated that these predictions were driven by similar variance in humans and GPT-3. Together, these results highlight the potential of large language models to capture important facets of human knowledge and yield a new approach for automatically generating interpretable feature sets, thus drastically expanding the potential use of semantic features in psychological and linguistic studies.
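The comparison the abstract describes (deriving concept similarity from feature norms and checking whether model-generated norms track human ones) can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the concepts, feature lists, and function names are invented for the example, and similarity is computed here as simple feature-set overlap (Jaccard).

```python
# Illustrative sketch: comparing model-generated feature norms against
# human feature norms via feature-overlap similarity.
# The feature lists below are made up for demonstration; they are NOT
# data from the study.

human_norms = {
    "apple":  {"is round", "is edible", "grows on trees", "has seeds"},
    "banana": {"is edible", "is yellow", "grows on trees", "has a peel"},
    "hammer": {"is a tool", "has a handle", "is made of metal"},
}

generated_norms = {
    "apple":  {"is round", "is edible", "is red", "grows on trees"},
    "banana": {"is edible", "is yellow", "has a peel"},
    "hammer": {"is a tool", "has a handle", "is used for nails"},
}

def jaccard(a, b):
    """Feature-overlap similarity between two feature sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def pairwise_similarity(norms):
    """Similarity for every unordered concept pair, from feature overlap."""
    concepts = sorted(norms)
    return {(c1, c2): jaccard(norms[c1], norms[c2])
            for c1 in concepts for c2 in concepts if c1 < c2}

human_sim = pairwise_similarity(human_norms)
model_sim = pairwise_similarity(generated_norms)

# If generated norms capture human-like structure, the two similarity
# spaces should agree: high-overlap pairs (apple/banana) stay high,
# unrelated pairs (apple/hammer) stay near zero, in both.
for pair in sorted(human_sim):
    print(pair, round(human_sim[pair], 2), round(model_sim[pair], 2))
```

In the actual study, the generated norms were evaluated against behavioral measures (similarity, relatedness, category membership) rather than a toy overlap metric, but the basic logic, deriving a concept-by-concept similarity structure from feature lists and comparing it across sources, is the same.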


