Understanding disentangling in β-VAE

04/10/2018
by Christopher P. Burgess, et al.

We present new intuitions and theoretical assessments of the emergence of disentangled representations in variational autoencoders. Taking a rate-distortion theory perspective, we show the circumstances under which representations aligned with the underlying generative factors of variation in the data emerge as training progresses, when optimising the modified ELBO bound in β-VAE. From these insights, we propose a modification to the training regime of β-VAE that progressively increases the information capacity of the latent code during training. This modification facilitates the robust learning of disentangled representations in β-VAE, without the previous trade-off in reconstruction accuracy.
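The proposed modification is a controlled capacity increase: the β-weighted KL term of the β-VAE objective is replaced by γ·|KL(q(z|x) || p(z)) - C|, where the target capacity C (in nats) is raised linearly from zero as training progresses. Below is a minimal PyTorch sketch of such a training loss; the function name, the Gaussian encoder outputting (mu, logvar), the Bernoulli decoder, and the schedule constants are illustrative assumptions, not the paper's exact experimental settings.

```python
import torch
import torch.nn.functional as F

def capacity_annealed_loss(recon_x, x, mu, logvar, step,
                           gamma=1000.0, c_max=25.0, anneal_steps=100_000):
    """β-VAE loss with a linearly increasing KL capacity target C."""
    # Reconstruction term, averaged over the batch; a Bernoulli decoder
    # is assumed, so recon_x holds pixel probabilities in [0, 1].
    recon = F.binary_cross_entropy(recon_x, x, reduction="sum") / x.size(0)

    # Closed-form KL divergence between the Gaussian posterior
    # q(z|x) = N(mu, sigma^2) and the unit-Gaussian prior, batch-averaged.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)

    # Capacity target C, annealed from 0 to c_max nats over anneal_steps.
    c = min(c_max, c_max * step / anneal_steps)

    # Controlled-capacity objective: reconstruction + γ·|KL - C|.
    return recon + gamma * (kl - c).abs()
```

Here `step` would be the global optimisation step, so the latent code is permitted to carry more information as training proceeds, which is what lets reconstructions sharpen without abandoning the pressure towards disentangling.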

Related research

07/13/2020 - PRI-VAE: Principle-of-Relevant-Information Variational Autoencoders
Although substantial efforts have been made to learn disentangled repres...

02/12/2021 - Demystifying Inductive Biases for β-VAE Based Architectures
The performance of β-Variational-Autoencoders (β-VAEs) and their variant...

02/27/2022 - Data Overlap: A Prerequisite For Disentanglement
Learning disentangled representations with variational autoencoders (VAE...

12/17/2018 - Variational Autoencoders Pursue PCA Directions (by Accident)
The Variational Autoencoder (VAE) is a powerful architecture capable of ...

02/16/2018 - Disentangling by Factorising
We define and address the problem of unsupervised learning of disentangl...

09/03/2019 - Improving Disentangled Representation Learning with the Beta Bernoulli Process
To improve the ability of VAE to disentangle in the latent space, existi...

05/21/2018 - Invariant Representations from Adversarially Censored Autoencoders
We combine conditional variational autoencoders (VAE) with adversarial c...
