Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations

11/29/2018
by Francesco Locatello, et al.

In recent years, interest in the unsupervised learning of disentangled representations has increased significantly. The key assumption is that real-world data is generated by a few explanatory factors of variation and that these factors can be recovered by unsupervised learning algorithms. A large number of unsupervised learning approaches based on auto-encoding, along with quantitative evaluation metrics of disentanglement, have been proposed; yet, the efficacy of the proposed approaches and the utility of the proposed notions of disentanglement have not been challenged in prior work. In this paper, we take a sober look at recent progress in the field and challenge some common assumptions. We first show theoretically that the unsupervised learning of disentangled representations is fundamentally impossible without inductive biases on both the models and the data. We then train more than 12,000 models covering the six most prominent methods and evaluate them across six disentanglement metrics in a reproducible large-scale experimental study on seven different data sets. On the positive side, we observe that the different methods successfully enforce the properties "encouraged" by the corresponding losses. On the negative side, we observe in our study that (1) "good" hyperparameters seemingly cannot be identified without access to ground-truth labels, (2) good hyperparameters do not transfer across data sets or across disentanglement metrics, and (3) increased disentanglement does not seem to lead to a decreased sample complexity of learning for downstream tasks. These results suggest that future work on disentanglement learning should be explicit about the role of inductive biases and (implicit) supervision, investigate concrete benefits of enforcing disentanglement of the learned representations, and consider a reproducible experimental setup covering several data sets.
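To make the auto-encoding approaches referenced above more concrete, the following is a minimal sketch (not the authors' implementation) of a beta-VAE-style objective, one of the prominent methods evaluated in the paper. The network sizes, the flattened-input assumption, and the value of beta are illustrative choices, not values taken from the study.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BetaVAE(nn.Module):
    """Toy beta-VAE: an encoder/decoder pair trained with a reweighted KL term."""

    def __init__(self, x_dim=4096, z_dim=10, beta=4.0):
        super().__init__()
        self.beta = beta  # beta > 1 increases pressure toward disentangled latents
        self.encoder = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)        # mean of q(z|x)
        self.logvar = nn.Linear(256, z_dim)    # log-variance of q(z|x)
        self.decoder = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                     nn.Linear(256, x_dim))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return self.decoder(z), mu, logvar

    def loss(self, x):
        x_hat, mu, logvar = self(x)
        # Reconstruction term (decoder outputs logits, targets in [0, 1]), averaged over the batch
        recon = F.binary_cross_entropy_with_logits(x_hat, x, reduction="sum") / x.size(0)
        # KL(q(z|x) || N(0, I)), averaged over the batch
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
        return recon + self.beta * kl

# Illustrative usage with random data standing in for a batch of flattened images:
model = BetaVAE()
x = torch.rand(64, 4096)
model.loss(x).backward()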
