Disentangling Factors of Variation Using Few Labels

05/03/2019
by Francesco Locatello, et al.

Learning disentangled representations is considered a cornerstone problem in representation learning. Recently, Locatello et al. (2019) demonstrated that unsupervised disentanglement learning without inductive biases is theoretically impossible, and that existing inductive biases and unsupervised methods do not allow one to consistently learn disentangled representations. However, in many practical settings, one might have access to a very limited amount of supervision, for example through manual labeling of training examples. In this paper, we investigate the impact of such supervision on state-of-the-art disentanglement methods and perform a large-scale study, training over 29000 models under well-defined and reproducible experimental conditions. We first observe that a very limited number of labeled examples (0.01--0.5% of the dataset) is sufficient to perform model selection on state-of-the-art unsupervised models. Yet, if one has access to labels for supervised model selection, this raises the natural question of whether they should also be incorporated into the training process. As a case study, we test the benefit of introducing (very limited) supervision into existing state-of-the-art unsupervised disentanglement methods, exploiting both the values of the labels and the ordinal information that can be deduced from them. Overall, we empirically validate that with very little and potentially imprecise supervision it is possible to reliably learn disentangled representations.
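The few-label model-selection idea described above can be sketched in a few lines: train several unsupervised models, then score each one on a tiny labeled subset and keep the best. The sketch below is not the paper's actual metric; it uses a simple correlation-based proxy (for each labeled factor, the best absolute correlation with any latent dimension), and all function names are hypothetical.

```python
import numpy as np

def factor_code_correlation(codes, factors):
    """Toy disentanglement proxy (NOT the metric used in the paper):
    for each ground-truth factor, take the highest absolute Pearson
    correlation with any single latent dimension, then average.
    codes: (n, d) latent representations of the few labeled examples
    factors: (n, k) their ground-truth factor values."""
    scores = []
    for j in range(factors.shape[1]):
        corrs = [abs(np.corrcoef(codes[:, i], factors[:, j])[0, 1])
                 for i in range(codes.shape[1])]
        scores.append(max(corrs))
    return float(np.mean(scores))

def select_model(candidate_codes, factors):
    """Pick the candidate model whose codes best align, one latent
    dimension per factor, with the few labeled factors."""
    scores = [factor_code_correlation(c, factors) for c in candidate_codes]
    return int(np.argmax(scores))

# Synthetic illustration: one "disentangled" candidate (axis-aligned
# codes) and one "entangled" candidate (factors linearly mixed).
rng = np.random.default_rng(0)
factors = rng.normal(size=(50, 2))
good = factors + 0.05 * rng.normal(size=(50, 2))
mixing = np.array([[1.0, 1.0], [1.0, -1.0]])
entangled = factors @ mixing + 0.05 * rng.normal(size=(50, 2))
best = select_model([good, entangled], factors)  # selects index 0
```

With independent unit-variance factors, each mixed code dimension correlates with either factor at only about 0.71, while the axis-aligned codes correlate near 1, so 50 labeled examples already separate the two candidates clearly.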


Related research

- A Commentary on the Unsupervised Learning of Disentangled Representations (07/28/2020)
- A Heuristic for Unsupervised Model Selection for Variational Disentangled Representation Learning (05/29/2019)
- An Image is Worth More Than a Thousand Words: Towards Disentanglement in the Wild (06/29/2021)
- A Survey of Inductive Biases for Factorial Representation-Learning (12/15/2016)
- A Sober Look at the Unsupervised Learning of Disentangled Representations and their Evaluation (10/27/2020)
- Towards efficient representation identification in supervised learning (04/10/2022)
- Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations (11/29/2018)
