Disentangled Representations from Non-Disentangled Models

02/11/2021
by   Valentin Khrulkov, et al.

Constructing disentangled representations is known to be a difficult task, especially in the unsupervised setting. The dominant paradigm of unsupervised disentanglement is to train a generative model that separates different factors of variation in its latent space. This separation is typically enforced by specific regularization terms in the model's objective function. These terms, however, introduce additional hyperparameters that govern the trade-off between disentanglement and generation quality. While tuning these hyperparameters is crucial for proper disentanglement, it is often unclear how to do so without external supervision. This paper investigates an alternative route to disentangled representations: we propose to extract such representations from state-of-the-art generative models trained without disentangling terms in their objectives. This post hoc disentanglement paradigm requires few or no hyperparameters when learning representations, yet achieves results on par with the existing state of the art, as shown by comparisons on established disentanglement metrics, fairness, and the abstract reasoning task. All our code and models are publicly available.
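To make the post hoc idea concrete, here is a minimal sketch of one common way to extract candidate factor directions from a pretrained generator without any disentanglement-specific training: PCA over sampled latent codes (in the spirit of GANSpace-style analyses). The function name and procedure are illustrative assumptions, not necessarily the paper's exact method.

```python
import numpy as np

def principal_latent_directions(latents, k=3):
    """Return the top-k principal directions of a set of latent codes.

    Directions of maximal variance in a pretrained generator's latent
    space often correspond to interpretable factors of variation.
    This is an illustrative sketch, not the paper's exact procedure.
    """
    centered = latents - latents.mean(axis=0, keepdims=True)
    # Right singular vectors of the centered data are the PCA axes,
    # ordered by the amount of latent variance they explain.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k]

# Toy usage: sample latent codes with one dominant axis of variation.
rng = np.random.default_rng(0)
z = rng.normal(size=(500, 5))
z[:, 0] *= 5.0  # inflate variance along the first coordinate
directions = principal_latent_directions(z, k=2)
```

Given such a direction, a representation or an edit is obtained with no extra hyperparameters beyond `k`: one decodes `z + alpha * direction` for a range of `alpha` and inspects which factor changes.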


