Visual Representation Learning Does Not Generalize Strongly Within the Same Domain

07/17/2021
by Lukas Schott et al.

An important component of generalization in machine learning is to uncover underlying latent factors of variation as well as the mechanism through which each factor acts in the world. In this paper, we test whether 17 unsupervised, weakly supervised, and fully supervised representation learning approaches correctly infer the generative factors of variation in simple datasets (dSprites, Shapes3D, MPI3D). In contrast to prior robustness work that introduces novel factors of variation at test time, such as blur or other (un)structured noise, we here recompose, interpolate, or extrapolate only existing factors of variation from the training dataset (e.g., small and medium-sized objects during training and large objects during testing). Models that learn the correct mechanism should be able to generalize to this benchmark. In total, we train and test 2000+ models and observe that all of them struggle to learn the underlying mechanism, regardless of supervision signal and architectural bias. Moreover, the generalization capabilities of all tested models drop significantly as we move from artificial datasets towards more realistic real-world datasets. Despite their inability to identify the correct mechanism, the models are quite modular: their ability to infer other in-distribution factors remains fairly stable, provided only a single factor is out-of-distribution. These results point to an important yet understudied problem of learning mechanistic models of observations that can facilitate generalization.
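The extrapolation protocol described above can be illustrated with a minimal sketch. This is not the authors' code; the factor ordering, value ranges, and the 0.66 threshold are assumptions chosen to mirror the paper's example of training on small and medium-sized objects while holding out large ones.

```python
import numpy as np

# Hypothetical factor matrix: each row is one sample's generative factor
# values, e.g. (shape, scale, orientation, pos_x, pos_y) for a dSprites-like
# dataset, all normalized to [0, 1]. Ordering and ranges are assumptions.
rng = np.random.default_rng(0)
factors = rng.uniform(0.0, 1.0, size=(1000, 5))

SCALE = 1  # assumed column index of the "scale" factor

# Extrapolation split on a single factor: small/medium objects
# (scale <= 0.66) form the training set; large objects (scale > 0.66)
# are held out entirely for testing.
test_mask = factors[:, SCALE] > 0.66
train_factors = factors[~test_mask]
test_factors = factors[test_mask]

# The train and test ranges of the held-out factor do not overlap,
# while all other factors keep their full range in both splits.
assert train_factors[:, SCALE].max() <= 0.66
assert test_factors[:, SCALE].min() > 0.66
```

An interpolation split would instead hold out an interior band of factor values (e.g. medium-sized objects), and a recomposition split would hold out particular combinations of otherwise-seen factor values.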


