Robust Disentanglement of a Few Factors at a Time

10/26/2020
by Benjamin Estermann, et al.

Disentanglement is at the forefront of unsupervised learning, as disentangled representations of data improve generalization, interpretability, and performance in downstream tasks. Current unsupervised approaches remain inapplicable to real-world datasets: their performance is highly variable, and they fail to reach the disentanglement levels of (semi-)supervised approaches. We introduce population-based training (PBT) to improve consistency in training variational autoencoders (VAEs) and demonstrate the validity of this approach in a supervised setting (PBT-VAE). We then use Unsupervised Disentanglement Ranking (UDR) as an unsupervised heuristic to score models during PBT-VAE training and show that models trained this way tend to consistently disentangle only a subset of the generative factors. Building on this observation, we introduce the recursive rPU-VAE approach: we train the model until convergence, remove the learned factors from the dataset, and reiterate. In doing so, we can label subsets of the dataset with the learned factors and subsequently use these labels to train a single model that fully disentangles the whole dataset. With this approach, we show striking improvements in state-of-the-art unsupervised disentanglement performance and robustness across multiple datasets and metrics.
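The recursive loop described above (train until convergence, remove the learned factors, reiterate, then use the accumulated labels for supervised training) can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: `train_pbt_vae` is a hypothetical stub standing in for a full PBT-trained, UDR-scored VAE run, and the factor names are assumptions.

```python
def train_pbt_vae(remaining_factors):
    """Stub for one PBT-VAE training run scored by UDR.

    The abstract observes that each run tends to consistently
    disentangle only a subset of the generative factors; here we
    model that by returning at most two of the remaining factors.
    """
    return set(sorted(remaining_factors)[:2])


def rpu_vae(generative_factors, max_rounds=10):
    """Recursive rPU-VAE sketch: iterate until every factor is labeled."""
    remaining = set(generative_factors)
    labeled = {}  # factor -> round in which it was learned
    for round_idx in range(max_rounds):
        if not remaining:
            break
        learned = train_pbt_vae(remaining)  # train until convergence
        for factor in learned:
            labeled[factor] = round_idx     # label data with learned factors
        remaining -= learned                # remove learned factors, reiterate
    # The accumulated labels would then supervise one final model
    # that disentangles the whole dataset.
    return labeled


labels = rpu_vae(["shape", "scale", "rotation", "pos_x", "pos_y"])
```

Under these toy assumptions, each round peels off a few factors until the whole (hypothetical) factor set carries a label, mirroring how the recursive procedure turns partial disentanglement into full supervision.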


Related research

07/14/2020 — Failure Modes of Variational Autoencoders and Their Effects on Downstream Tasks
Variational Auto-encoders (VAEs) are deep generative latent variable mod...

11/24/2017 — JADE: Joint Autoencoders for Dis-Entanglement
The problem of feature disentanglement has been explored in the literatu...

10/31/2018 — Interventional Robustness of Deep Latent Variable Models
The ability to learn disentangled representations that split underlying ...

02/20/2021 — GroupifyVAE: from Group-based Definition to VAE-based Unsupervised Representation Disentanglement
The key idea of the state-of-the-art VAE-based unsupervised representati...

12/14/2018 — Learning Latent Subspaces in Variational Autoencoders
Variational autoencoders (VAEs) are widely used deep generative models c...

05/18/2023 — Unsupervised Multi-channel Separation and Adaptation
A key challenge in machine learning is to generalize from training data ...

11/15/2019 — Gated Variational AutoEncoders: Incorporating Weak Supervision to Encourage Disentanglement
Variational AutoEncoders (VAEs) provide a means to generate representati...
