Covariate-informed Representation Learning with Samplewise Optimal Identifiable Variational Autoencoders

02/09/2022
by Young Geun Kim, et al.

The recently proposed identifiable variational autoencoder (iVAE; Khemakhem et al., 2020) framework provides a promising approach for learning the latent independent components of data. Although the identifiability is appealing, the objective function of iVAE does not enforce an inverse relation between the encoder and the decoder. Without this inverse relation, representations from the encoder in iVAE may fail to reconstruct the observations, i.e., the representations lose information contained in the observations. To overcome this limitation, we develop a new approach, the covariate-informed identifiable VAE (CI-iVAE). Unlike previous iVAE implementations, our method critically leverages the posterior distribution of the latent variables conditioned only on the observations. In doing so, the objective function enforces the inverse relation, and the learned representation retains more information about the observations. Furthermore, CI-iVAE extends the original iVAE objective function to a larger class of objectives and finds the optimal one among them, thus providing a better fit to the data. Theoretically, our method has tighter evidence lower bounds (ELBOs) than the original iVAE. We demonstrate that our approach can more reliably learn features of various synthetic datasets, two benchmark image datasets (EMNIST and Fashion-MNIST), and a large-scale brain imaging dataset for adolescent mental health research.
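The "larger class of objectives" idea can be illustrated with a toy sketch: index a family of ELBOs by a weight alpha that interpolates between an encoder conditioned on observations and covariates, q(z|x,u), and one conditioned on observations alone, q(z|x), then keep, per sample, the alpha with the largest ELBO. This is a minimal NumPy illustration under simplifying assumptions (Gaussian encoders and prior, unit-variance Gaussian likelihood, parameter-space interpolation of the two encoders); all function names and the interpolation scheme are hypothetical, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ), summed over latent dimensions.
    var_q, var_p = np.exp(logvar_q), np.exp(logvar_p)
    return 0.5 * np.sum(
        logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0, axis=-1
    )

def elbo(x, mu_q, logvar_q, mu_p, logvar_p, decode, n_mc=8):
    # Monte Carlo reconstruction term (unit-variance Gaussian likelihood)
    # plus analytic KL to the covariate-conditioned prior p(z|u).
    recon = 0.0
    for _ in range(n_mc):
        z = mu_q + np.exp(0.5 * logvar_q) * rng.standard_normal(mu_q.shape)
        recon += -0.5 * np.sum((x - decode(z)) ** 2, axis=-1)
    recon /= n_mc
    return recon - gaussian_kl(mu_q, logvar_q, mu_p, logvar_p)

def samplewise_optimal_elbo(x, enc_xu, enc_x, prior, decode,
                            alphas=np.linspace(0.0, 1.0, 11)):
    # Interpolate the two encoders' Gaussian parameters with weight alpha
    # and, per sample, keep the alpha that maximizes the resulting ELBO.
    (mu_xu, lv_xu), (mu_x, lv_x), (mu_p, lv_p) = enc_xu, enc_x, prior
    per_alpha = []
    for a in alphas:
        mu_q = a * mu_xu + (1.0 - a) * mu_x
        lv_q = a * lv_xu + (1.0 - a) * lv_x
        per_alpha.append(elbo(x, mu_q, lv_q, mu_p, lv_p, decode))
    per_alpha = np.stack(per_alpha)          # shape: (n_alpha, n_samples)
    best = per_alpha.max(axis=0)             # samplewise-optimal ELBO
    return best, alphas[per_alpha.argmax(axis=0)]
```

Because the family includes alpha = 1 (the original iVAE-style encoder q(z|x,u)) as one member, the samplewise maximum is, by construction, at least as large as that fixed choice on the evaluated objectives, mirroring the abstract's tighter-ELBO claim in this toy setting.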

