Natural Gradient Variational Inference with Gaussian Mixture Models

11/15/2021
by Farzaneh Mahdisoltani, et al.

Bayesian methods estimate a measure of uncertainty through the posterior distribution. One source of difficulty in these methods is the computation of the normalizing constant: calculating the exact posterior is generally intractable, so it is usually approximated. Variational Inference (VI) methods approximate the posterior with a distribution chosen from a simple family, fitted by optimization. The main contribution of this work is a set of update rules for natural-gradient variational inference with a mixture of Gaussians, which can be run independently for each mixture component, potentially in parallel.
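
To make the per-component structure concrete, below is a minimal, hedged sketch (not the paper's exact algorithm) of natural-gradient VI updates for a Gaussian mixture approximation q(theta) = sum_k w_k N(theta; mu_k, Sigma_k), assuming fixed mixture weights and a user-supplied gradient of the unnormalized log posterior (grad_log_p is an assumed stand-in, as are all helper names). Each component's mean and covariance is updated from its own Monte Carlo samples, so the loop over components could run in parallel.

import numpy as np
from scipy.stats import multivariate_normal


def log_q_grad(theta, mus, Sigmas, weights):
    # Gradient of the mixture log-density log q(theta), in closed form via responsibilities.
    comp_log = np.array([multivariate_normal.logpdf(theta, mean=mus[k], cov=Sigmas[k])
                         for k in range(len(weights))])
    log_q = np.logaddexp.reduce(np.log(weights) + comp_log)
    resp = np.exp(np.log(weights) + comp_log - log_q)      # responsibilities r_k(theta)
    grad = np.zeros_like(theta)
    for k in range(len(weights)):
        grad += resp[k] * np.linalg.solve(Sigmas[k], mus[k] - theta)
    return grad


def ngvi_mixture_step(mus, Sigmas, weights, grad_log_p, beta=0.05, n_samples=64, rng=None):
    # One natural-gradient step; every component is updated independently of the others,
    # so the loop over k is embarrassingly parallel.
    rng = np.random.default_rng() if rng is None else rng
    K, D = mus.shape
    new_mus, new_Sigmas = mus.copy(), Sigmas.copy()
    for k in range(K):
        L = np.linalg.cholesky(Sigmas[k])
        thetas = mus[k] + rng.standard_normal((n_samples, D)) @ L.T   # samples from q_k
        # Gradient of h(theta) = log p(theta) - log q(theta) at each sample.
        g = np.stack([grad_log_p(t) - log_q_grad(t, mus, Sigmas, weights) for t in thetas])
        g_mu = g.mean(axis=0)                             # MC estimate of E_qk[grad h]
        # Stein-type estimate of E_qk[hess h], symmetrised for numerical stability.
        Sinv = np.linalg.inv(Sigmas[k])
        H = Sinv @ ((thetas - mus[k]).T @ g) / n_samples
        H = 0.5 * (H + H.T)
        # Natural-gradient updates in natural-parameter (precision / mean) space:
        #   Sigma_k^{-1} <- Sigma_k^{-1} - beta * H,   mu_k <- mu_k + beta * Sigma_k * g_mu
        new_prec = Sinv - beta * H
        new_Sigmas[k] = np.linalg.inv(new_prec)
        new_mus[k] = mus[k] + beta * new_Sigmas[k] @ g_mu
    return new_mus, new_Sigmas

A step of this form would be iterated, e.g. mus, Sigmas = ngvi_mixture_step(mus, Sigmas, weights, grad_log_p). A small step size beta typically keeps the updated precision positive definite, though this sketch does not enforce it.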

