Human alignment of neural network representations

11/02/2022
by Lukas Muttenthaler et al.

Today's computer vision models achieve human-level or near-human-level performance across a wide variety of vision tasks. However, their architectures, data, and learning algorithms differ in numerous ways from those that give rise to human vision. In this paper, we investigate the factors that affect alignment between the representations learned by neural networks and human concept representations. Human representations are inferred from behavioral responses in an odd-one-out triplet task: humans were presented with three images and had to select the odd one out. We find that model scale and architecture have essentially no effect on alignment with human behavioral responses, whereas the training dataset and objective function have a much larger impact. Using a sparse Bayesian model of human conceptual representations, we partition triplets by the concept that distinguishes the two similar images from the odd one out, finding that some concepts, such as food and animals, are well represented in neural network representations, whereas others, such as royal or sports-related objects, are not. Overall, although models trained on larger, more diverse datasets achieve better alignment with humans than models trained on ImageNet alone, our results indicate that scaling alone is unlikely to suffice to train neural networks with conceptual representations that match those used by humans.

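To make the alignment measure concrete, the sketch below scores a model's zero-shot agreement with human odd-one-out choices: for each triplet, the two images with the most similar embeddings are taken to form the pair, and the remaining image is the model's predicted odd one out. This is a minimal illustration under assumed conventions, not the authors' exact pipeline; the function names and the (n, 4) triplet layout are hypothetical.

```python
import numpy as np

def odd_one_out(embeddings: np.ndarray) -> int:
    """Predict the odd one out among three embeddings (one per row)."""
    # Normalize rows so dot products become cosine similarities.
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = e @ e.T
    # The two most similar images form the pair; the third is the odd one out.
    pairs = [(0, 1), (0, 2), (1, 2)]
    i, j = max(pairs, key=lambda p: sims[p])
    return ({0, 1, 2} - {i, j}).pop()

def triplet_alignment(embeddings: np.ndarray, triplets: np.ndarray) -> float:
    """Fraction of triplets where the model agrees with the human choice.

    triplets: (n, 4) integer array (assumed layout); columns 0-2 are image
    indices into `embeddings`, column 3 is the human odd-one-out position
    (0, 1, or 2) within the triplet.
    """
    hits = 0
    for a, b, c, human in triplets:
        hits += int(odd_one_out(embeddings[[a, b, c]]) == human)
    return hits / len(triplets)
```

Chance performance on this task is 1/3, so any alignment score should be read against that baseline.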

Related research

04/05/2023 · Behavioral estimates of conceptual structure are robust across tasks in humans but not large language models
Neural network models of language have long been used as a tool for deve...

09/30/2018 · Optical Illusions Images Dataset
Human vision is capable of performing many tasks not optimized for in it...

10/13/2020 · Transforming Neural Network Visual Representations to Predict Human Judgments of Similarity
Deep-learning vision models have shown intriguing similarities and diffe...

06/07/2023 · Improving neural network representations using human similarity judgments
Deep neural networks have reached human-level performance on many comput...

01/27/2023 · Alignment with human representations supports robust few-shot learning
Should we care whether AI systems have representations of the world that...

05/02/2022 · VICE: Variational Interpretable Concept Embeddings
A central goal in the cognitive sciences is the development of numerical...
