Learning Optimal Conditional Priors For Disentangled Representations

10/19/2020
by   Graziano Mita, et al.

A large part of the literature on learning disentangled representations focuses on variational autoencoders (VAEs). Recent developments demonstrate that disentanglement cannot be obtained in a fully unsupervised setting without inductive biases on models and data. To address this, Khemakhem et al. (AISTATS 2020) suggest employing a factorized prior distribution over the latent variables that is conditioned on auxiliary observed variables complementing the input observations. While this is a remarkable advancement toward model identifiability, the learned conditional prior only targets sufficiency, giving no guarantees of a minimal representation. Motivated by information-theoretic principles, we propose a novel VAE-based generative model with theoretical guarantees on disentanglement. Our proposed model learns a sufficient and compact (thus optimal) conditional prior, which serves as a regularizer for the latent space. Experimental results indicate superior performance with respect to state-of-the-art methods, according to several established metrics from the disentanglement literature.
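The abstract does not spell out the training objective, but in conditional-prior VAEs of this kind the latent regularizer is typically the KL divergence between the approximate posterior q(z|x) and a conditional prior p(z|u) whose parameters are produced by a network conditioned on the auxiliary variable u. As a minimal sketch (the function name and the choice of diagonal Gaussians are illustrative assumptions, not the paper's exact formulation), the closed-form KL term between two diagonal Gaussians can be computed as:

```python
import math

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """Closed-form KL( N(mu_q, diag(exp(logvar_q))) || N(mu_p, diag(exp(logvar_p))) ),
    summed over latent dimensions.

    In a conditional-prior VAE, (mu_p, logvar_p) would come from a network
    that takes the auxiliary variable u as input; here they are passed
    directly for illustration.
    """
    kl = 0.0
    for mq, lq, mp, lp in zip(mu_q, logvar_q, mu_p, logvar_p):
        # Per-dimension KL between univariate Gaussians:
        # 0.5 * ( log(var_p/var_q) + (var_q + (mu_q - mu_p)^2) / var_p - 1 )
        kl += 0.5 * (lp - lq + (math.exp(lq) + (mq - mp) ** 2) / math.exp(lp) - 1.0)
    return kl

# When posterior and conditional prior coincide, the regularizer vanishes;
# as they diverge, the penalty grows, pushing the latent code toward
# the (sufficient and compact) conditional prior.
```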


