
Learning Model Reparametrizations: Implicit Variational Inference by Fitting MCMC distributions

by Michalis K. Titsias, et al.

We introduce a new algorithm for approximate inference that combines reparametrization, Markov chain Monte Carlo (MCMC) and variational methods. We construct a highly flexible implicit variational distribution synthesized by an arbitrary MCMC operation and a deterministic transformation that can be optimized using the reparametrization trick. Unlike current methods for implicit variational inference, our method avoids computing log density ratios and is therefore easily applicable to arbitrary continuous and differentiable models. We demonstrate the proposed algorithm by fitting banana-shaped distributions and by training variational autoencoders.
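To make the idea concrete, here is a minimal toy sketch (not the paper's algorithm) of the two ingredients the abstract names: samples produced by an arbitrary MCMC operation are passed through a deterministic transformation whose parameters are updated with the reparametrization trick. The banana-shaped target, the random-walk Metropolis base chain, the purely translational transform `T(eps) = mu + eps`, and the objective (only the expected log target density; the entropy term that the paper's method handles is omitted) are all simplifying assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_banana(z, b=0.1):
    # unnormalized log density of a banana-shaped target (illustrative choice)
    z1, z2 = z[..., 0], z[..., 1]
    return -0.5 * (z1**2 / 4.0 + (z2 + b * (z1**2 - 4.0))**2)

def grad_log_banana(z, b=0.1):
    # analytic gradient of log_banana with respect to z
    z1, z2 = z[..., 0], z[..., 1]
    inner = z2 + b * (z1**2 - 4.0)
    g1 = -z1 / 4.0 - 2.0 * b * z1 * inner
    g2 = -inner
    return np.stack([g1, g2], axis=-1)

def mcmc_base_samples(n, steps=20, scale=0.5):
    # short random-walk Metropolis chains targeting a standard normal;
    # these play the role of the arbitrary MCMC base distribution
    z = rng.standard_normal((n, 2))
    for _ in range(steps):
        prop = z + scale * rng.standard_normal((n, 2))
        log_ratio = -0.5 * (prop**2).sum(-1) + 0.5 * (z**2).sum(-1)
        accept = np.log(rng.uniform(size=n)) < log_ratio
        z[accept] = prop[accept]
    return z

# deterministic transformation T(eps) = mu + eps; mu is learned by
# stochastic gradient ascent on E[log p(T(eps))] via the
# reparametrization trick (gradient passes through T, not the sampler)
mu = np.zeros(2)
for _ in range(200):
    eps = mcmc_base_samples(64)
    grad_mu = grad_log_banana(mu + eps).mean(axis=0)
    mu += 0.05 * grad_mu
```

Because the gradient is taken through the transformation applied to base samples, no density of the implicit variational distribution (and hence no log density ratio) is ever evaluated, which is the property the abstract highlights.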
