MaxEntropy Pursuit Variational Inference

05/20/2019
by Evgenii Egorov, et al.

One of the core problems in variational inference is the choice of the approximate posterior distribution. It is crucial to trade off between the efficiency of inference with simple families, such as mean-field models, and the accuracy of the approximation. We propose a variant of greedy approximation of the posterior distribution with tractable base learners. Using the Max-Entropy approach, we obtain a well-defined optimization problem. We demonstrate the ability of the method to capture complex multimodal posteriors in a continual-learning setting for neural networks.
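The greedy construction described in the abstract (iteratively adding a tractable base learner, here a Gaussian component, to the current mixture approximation) can be sketched as follows. This is a minimal illustration of boosting-style greedy variational inference, not the authors' Max-Entropy procedure: the 1-D bimodal target, the quadrature ELBO estimate, and the Nelder-Mead optimizer are all assumptions chosen to keep the example small and runnable.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical unnormalized log-density of a bimodal 1-D target.
def log_p(x):
    return np.logaddexp(-0.5 * (x + 2.0) ** 2, -0.5 * (x - 2.0) ** 2)

def log_q(x, means, log_stds, weights):
    # Log-density of a Gaussian mixture with the given components.
    comps = [
        np.log(w) - 0.5 * ((x - m) / np.exp(s)) ** 2 - s - 0.5 * np.log(2 * np.pi)
        for m, s, w in zip(means, log_stds, weights)
    ]
    return np.logaddexp.reduce(np.stack(comps), axis=0)

# Quadrature grid; adequate for a 1-D toy problem.
XS = np.linspace(-10.0, 10.0, 2001)
DX = XS[1] - XS[0]

def neg_elbo(params, means, log_stds, weights):
    # Negative ELBO of the current mixture extended by one candidate component.
    m, s, logit_a = params
    if means:
        a = float(np.clip(1.0 / (1.0 + np.exp(-logit_a)), 1e-6, 1.0 - 1e-6))
        ms, ss = means + [m], log_stds + [s]
        ws = [w * (1.0 - a) for w in weights] + [a]
    else:
        ms, ss, ws = [m], [s], [1.0]  # the first base learner takes all the mass
    lq = log_q(XS, ms, ss, ws)
    q = np.exp(lq)
    return -float(np.sum(q * (log_p(XS) - lq)) * DX)

means, log_stds, weights = [], [], []
history = []  # negative ELBO after each greedy step
for t in range(3):
    best = None
    for m0 in (-3.0, 0.0, 3.0):  # a few restarts to escape local optima
        res = minimize(neg_elbo, x0=[m0, 0.0, -2.0],
                       args=(means, log_stds, weights), method="Nelder-Mead")
        if best is None or res.fun < best.fun:
            best = res
    m, s, logit_a = best.x
    a = float(np.clip(1.0 / (1.0 + np.exp(-logit_a)), 1e-6, 1.0 - 1e-6)) if means else 1.0
    weights = [w * (1.0 - a) for w in weights] + [a]
    means.append(m)
    log_stds.append(s)
    history.append(best.fun)
    print(f"greedy step {t}: negative ELBO = {best.fun:.3f}")
```

Each greedy step re-optimizes only the new component and its mixture weight, so earlier base learners stay fixed; because the weight of the new component can shrink toward zero, the ELBO should not degrade from one step to the next.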

Related research:

- 05/21/2015, Variational Inference with Normalizing Flows: "The choice of approximate posterior distribution is one of the core prob..."
- 11/15/2021, Natural Gradient Variational Inference with Gaussian Mixture Models: "Bayesian methods estimate a measure of uncertainty by using the posterio..."
- 08/16/2021, Variational Inference at Glacier Scale: "We characterize the complete joint posterior distribution over spatially..."
- 05/27/2022, Clustering Functional Data via Variational Inference: "Functional data analysis deals with data that are recorded densely over ..."
- 01/16/2013, Dynamic Trees: A Structured Variational Method Giving Efficient Propagation Rules: "Dynamic trees are mixtures of tree structured belief networks. They solv..."
- 01/10/2018, Inference Suboptimality in Variational Autoencoders: "Amortized inference has led to efficient approximate inference for large..."
- 03/04/2015, Bethe Projections for Non-Local Inference: "Many inference problems in structured prediction are naturally solved by..."
