Faithful Model Inversion Substantially Improves Auto-encoding Variational Inference

12/01/2017
by Stefan Webb, et al.

In learning deep generative models, the encoder for variational inference is typically formed in an ad hoc manner, with a structure and parametrization analogous to the forward model. Our chief insight is that this produces coarse approximations to the posterior, and that the d-separation properties of the Bayesian network (BN) structure of the forward model should instead be used, in a principled way, to produce encoders that are faithful to the posterior; for this we introduce the novel Compact Minimal I-map (CoMI) algorithm. Applying our method to common models reveals that standard encoder design choices omit many important edges, and through experiments we demonstrate that modelling these edges is important for optimal learning. We show how using a faithful encoder is crucial when modelling with continuous relaxations of categorical distributions.
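The following is a minimal sketch, not taken from the paper, of the explaining-away effect that motivates faithful encoder structures: in a toy forward model with independent binary latents z1 and z2 and an observation x = OR(z1, z2), the latents are marginally independent but become dependent once x is observed, so a mean-field encoder q(z1|x) q(z2|x) that simply mirrors the forward factorization cannot match the posterior. The toy model, prior, and variable names are illustrative assumptions.

```python
# Minimal sketch (assumed toy model, not from the paper): explaining-away in
# the forward model z1 -> x <- z2 with x = OR(z1, z2) and independent priors.
import itertools

p_z = 0.5  # assumed prior P(z=1) for each latent

# Joint P(z1, z2, x) under the forward model.
joint = {}
for z1, z2 in itertools.product([0, 1], repeat=2):
    x = int(z1 or z2)
    joint[(z1, z2, x)] = (p_z if z1 else 1 - p_z) * (p_z if z2 else 1 - p_z)

# Posterior P(z1, z2 | x = 1): normalize the slice of the joint with x = 1.
evidence = {k: v for k, v in joint.items() if k[2] == 1}
norm = sum(evidence.values())
posterior = {k[:2]: v / norm for k, v in evidence.items()}

# Marginals of the posterior over each latent.
p_z1 = sum(v for (z1, _), v in posterior.items() if z1 == 1)
p_z2 = sum(v for (_, z2), v in posterior.items() if z2 == 1)

# A mean-field encoder q(z1|x) q(z2|x) assumes the posterior factorizes;
# compare the true joint posterior against the product of its marginals.
print("P(z1=1, z2=1 | x=1)           =", posterior[(1, 1)])   # 1/3
print("P(z1=1 | x=1) * P(z2=1 | x=1) =", p_z1 * p_z2)          # 4/9
```

The two printed values differ, so z1 and z2 are coupled given x: a faithful inverse must carry the extra edge, e.g. factorizing as q(z1|x) q(z2|z1, x). This is the kind of missing dependency that, per the abstract, the CoMI algorithm is designed to identify from the d-separation structure of the forward model.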
