Self-Reflective Variational Autoencoder

07/10/2020
by Ifigeneia Apostolopoulou et al.

The Variational Autoencoder (VAE) is a powerful framework for learning probabilistic latent variable generative models. However, typical assumptions on the approximate posterior distribution of the encoder and/or the prior seriously restrict its capacity for inference and generative modeling. Variational inference based on neural autoregressive models respects the conditional dependencies of the exact posterior, but this flexibility comes at a cost: such models are expensive to train in high-dimensional regimes and can be slow to produce samples. In this work, we introduce an orthogonal solution, which we call self-reflective inference. By redesigning the hierarchical structure of existing VAE architectures, self-reflection ensures that the stochastic flow preserves the factorization of the exact posterior, sequentially updating the latent codes in a recurrent manner consistent with the generative model. We empirically demonstrate the clear advantages of matching the variational posterior to the exact posterior: on binarized MNIST, self-reflective inference achieves state-of-the-art performance without resorting to complex, computationally expensive components such as autoregressive layers. Moreover, we design a variational normalizing flow that employs the proposed architecture, yielding predictive benefits compared to its purely generative counterpart. Our proposed modification is quite general and complements the existing literature; self-reflective inference can naturally leverage advances in distribution estimation and generative modeling to improve the capacity of each layer in the hierarchy.
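
The abstract describes the key idea at an architectural level: make the variational posterior factorize the same way the exact posterior does. Since no implementation details are given here, the following is only a minimal PyTorch sketch of that idea for a two-layer hierarchy, where the generative model is p(z2) p(z1 | z2) p(x | z1), the exact posterior factorizes as p(z2 | x) p(z1 | z2, x), and the approximate posterior q(z2 | x) q(z1 | z2, x) is built to match that ordering. All names and sizes (TopDownVAE, q_z2, h_dim, and so on) are illustrative assumptions, not the authors' architecture.

# Minimal sketch (assumed, not the authors' code): a two-layer hierarchical
# VAE whose approximate posterior q(z2 | x) q(z1 | z2, x) mirrors the
# factorization of the exact posterior p(z2 | x) p(z1 | z2, x).
import torch
import torch.nn as nn

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ), summed over latent dimensions.
    return 0.5 * torch.sum(
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
        - 1.0, dim=-1)

class TopDownVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=32, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.q_z2 = nn.Linear(h_dim, 2 * z_dim)          # q(z2 | x)
        self.q_z1 = nn.Linear(h_dim + z_dim, 2 * z_dim)  # q(z1 | z2, x)
        self.p_z1 = nn.Sequential(                       # p(z1 | z2)
            nn.Linear(z_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, 2 * z_dim))
        self.dec = nn.Sequential(                        # p(x | z1)
            nn.Linear(z_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        # Sample the top latent first, then condition the lower latent on it,
        # so inference visits the latents in the generative model's order.
        mu2, logvar2 = self.q_z2(h).chunk(2, dim=-1)
        z2 = mu2 + torch.randn_like(mu2) * (0.5 * logvar2).exp()
        mu1_q, logvar1_q = self.q_z1(torch.cat([h, z2], dim=-1)).chunk(2, dim=-1)
        z1 = mu1_q + torch.randn_like(mu1_q) * (0.5 * logvar1_q).exp()
        # KL terms: standard-normal prior on z2, learned conditional prior on z1.
        mu1_p, logvar1_p = self.p_z1(z2).chunk(2, dim=-1)
        kl = gaussian_kl(mu2, logvar2,
                         torch.zeros_like(mu2), torch.zeros_like(logvar2))
        kl = kl + gaussian_kl(mu1_q, logvar1_q, mu1_p, logvar1_p)
        recon = nn.functional.binary_cross_entropy_with_logits(
            self.dec(z1), x, reduction="none").sum(dim=-1)
        return (recon + kl).mean()  # negative ELBO, averaged over the batch

A toy usage, assuming binarized inputs in [0, 1]:

model = TopDownVAE()
x = torch.rand(16, 784).bernoulli()  # stand-in for a binarized MNIST batch
loss = model(x)                      # negative ELBO
loss.backward()

Sampling z2 before z1 is the point of the exercise: inference updates the latents in the same order the generative model does, so the stochastic flow respects the conditional dependencies of the exact posterior.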

