
Exploring Alignment of Representations with Human Perception

11/29/2021
by Vedant Nanda, et al.
Max Planck Institute for Software Systems

We argue that a valuable perspective on when a model learns good representations is that inputs the model maps to similar representations should also be perceived as similar by humans. We use representation inversion to generate multiple inputs that map to the same model representation, then quantify the perceptual similarity of these inputs via human surveys. Our approach yields a measure of the extent to which a model is aligned with human perception. Using this measure, we evaluate models trained with various learning paradigms (supervised and self-supervised learning) and different training losses (standard and robust training). Our results suggest that the alignment of representations with human perception provides useful additional insight into the qualities of a model. For example, we find that alignment with human perception can serve as a measure of trust in a model's prediction on inputs where different models produce conflicting outputs. We also find that various properties of a model, such as its architecture, training paradigm, training loss, and data augmentation, play a significant role in learning representations that are aligned with human perception.
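As a rough illustration of the inversion step, here is a minimal sketch in PyTorch. The name `feature_extractor` is a hypothetical stand-in for the model's representation function, and the L2 objective, Adam optimizer, and step count are illustrative assumptions rather than the paper's exact recipe: given a target image, gradient descent adjusts a seed input until its representation matches the target's.

```python
import torch
import torch.nn.functional as F

def invert_representation(feature_extractor, target_image, seed_image,
                          steps=1000, lr=0.01):
    """Optimize seed_image so its representation matches target_image's.

    feature_extractor should be a frozen module in eval mode that maps
    an image batch to a representation tensor. Returns an input that the
    model maps (approximately) to the same representation as target_image.
    """
    with torch.no_grad():
        target_repr = feature_extractor(target_image)

    # Only the input is optimized; the model's weights stay fixed.
    x = seed_image.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([x], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        loss = F.mse_loss(feature_extractor(x), target_repr)
        loss.backward()
        optimizer.step()
        # Keep the optimized input in the valid image range.
        with torch.no_grad():
            x.clamp_(0.0, 1.0)

    return x.detach()
```

Running this from several different seed images yields multiple inputs that the model maps to nearly the same representation; the human surveys then ask whether people also perceive these inputs as similar.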



Related research

01/27/2023
Alignment with human representations supports robust few-shot learning
Should we care whether AI systems have representations of the world that...

09/26/2020
SEMI: Self-supervised Exploration via Multisensory Incongruity
Efficient exploration is a long-standing problem in reinforcement learni...

05/30/2023
Which Models have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness
One of the remarkable properties of robust computer vision models is tha...

06/03/2019
Learning Perceptually-Aligned Representations via Adversarial Robustness
Many applications of machine learning require models that are human-alig...

09/10/2022
Self-supervised Human Mesh Recovery with Cross-Representation Alignment
Fully supervised human mesh recovery methods are data-hungry and have po...

07/16/2019
Perception of visual numerosity in humans and machines
Numerosity perception is foundational to mathematical learning, but its ...

06/14/2021
Revisiting Model Stitching to Compare Neural Representations
We revisit and extend model stitching (Lenc & Vedaldi 2015) as a metho...