Identifying through Flows for Recovering Latent Representations

09/27/2019
by Shen Li, et al.

Identifiability — the recovery of the true latent representations from which the observed data originate — is a fundamental goal of representation learning. However, most deep generative models do not address the question of identifiability and cannot recover the true latent sources that generate the observations. Recent work proposed identifiable generative modelling using variational autoencoders (iVAE), together with a theory of identifiability. However, due to the intractability of the KL divergence between the variational posterior and the true posterior, iVAE must maximize the evidence lower bound of the marginal likelihood, leading to suboptimal solutions in both theory and practice. In contrast, we propose an identifiable framework for estimating latent representations using a flow-based model (iFlow). Our approach directly maximizes the marginal likelihood, allowing for theoretical guarantees on identifiability without the need for variational approximations. We derive its learning objective in analytical form, making it possible to train iFlow in an end-to-end manner. Simulations on synthetic data validate the correctness and effectiveness of the proposed method and demonstrate its practical advantages over existing methods.
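The abstract's central point — that a flow-based model admits an exact, analytical likelihood objective via the change-of-variables formula, with no ELBO required — can be illustrated with a toy sketch. This is not the authors' iFlow implementation; it is a minimal one-dimensional affine flow, with all parameter names and settings chosen here for illustration, trained by gradient ascent on the exact log-likelihood:

```python
import numpy as np

# Toy sketch (NOT the authors' iFlow): a 1-D affine flow x = mu + sigma * z,
# with latent z ~ N(0, 1). The change-of-variables formula gives the exact
# log-likelihood, log p(x) = log N(z; 0, 1) - log sigma, where the -log sigma
# term is the log |det Jacobian| of the inverse map z = (x - mu) / sigma.
rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=5000)  # observed data

mu, log_sigma = 0.0, 0.0   # flow parameters (sigma > 0 via log-parametrization)
lr = 0.1
for _ in range(500):
    sigma = np.exp(log_sigma)
    z = (x - mu) / sigma   # inverse flow: data -> latent
    # Closed-form gradients of the mean exact log-likelihood:
    #   d/d mu        [-0.5 z^2 - log sigma] = z / sigma
    #   d/d log_sigma [-0.5 z^2 - log sigma] = z^2 - 1
    mu += lr * np.mean(z) / sigma
    log_sigma += lr * (np.mean(z**2) - 1.0)

print(mu, np.exp(log_sigma))  # recovers roughly mu ~ 3, sigma ~ 2
```

Because the likelihood is exact rather than a lower bound, the maximizer recovers the true generative parameters up to sampling noise — the same reason iFlow avoids the suboptimality that the ELBO introduces for iVAE.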


Related research:

research · 06/17/2020
Analytical Probability Distributions and EM-Learning for Deep Generative Networks
Deep Generative Networks (DGNs) with probabilistic modeling of their out...

research · 11/01/2022
Improving Variational Autoencoders with Density Gap-based Regularization
Variational autoencoders (VAEs) are one of the powerful unsupervised lea...

research · 01/10/2019
Preventing Posterior Collapse with delta-VAEs
Due to the phenomenon of "posterior collapse," current latent variable g...

research · 09/30/2019
Tightening Bounds for Variational Inference by Revisiting Perturbation Theory
Variational inference has become one of the most widely used methods in ...

research · 12/11/2019
Multimodal Generative Models for Compositional Representation Learning
As deep neural networks become more adept at traditional tasks, many of ...

research · 05/14/2021
Adapting deep generative approaches for getting synthetic data with realistic marginal distributions
Synthetic data generation is of great interest in diverse applications, ...

research · 09/02/2019
A Surprisingly Effective Fix for Deep Latent Variable Modeling of Text
When trained effectively, the Variational Autoencoder (VAE) is both a po...
