Bayesic
Probabilistic programming for large datasets, via stochastic variational inference
Mean-field variational inference is a method for approximate Bayesian posterior inference. It approximates the full posterior with a factorized set of distributions by maximizing a lower bound on the marginal likelihood. This requires computing expectations of the terms in the log joint likelihood under the factorized distribution. These expectations are often not available in closed form, which is typically handled by introducing a further lower bound. We present an alternative algorithm, based on stochastic optimization, that allows direct optimization of the variational lower bound. The method uses control variates to reduce the variance of the stochastic search gradient, and existing lower bounds can play an important role in constructing them. We demonstrate the approach on two non-conjugate models: logistic regression and an approximation to the HDP.
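The core idea above — estimating the gradient of the variational lower bound by sampling from the factorized distribution, with a control variate to tame the variance — can be sketched as follows. This is a minimal illustration for Bayesian logistic regression with a mean-field Gaussian, using a simple scalar baseline as the control variate rather than the paper's construction; all names, sizes, and step sizes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy logistic-regression data (illustrative, not from the paper).
N, D = 100, 3
X = rng.normal(size=(N, D))
true_w = np.array([1.0, -2.0, 0.5])
y = (rng.random(N) < 1.0 / (1.0 + np.exp(-X @ true_w))).astype(float)

def log_joint(w):
    """log p(y, w) with a standard-normal prior on the weights."""
    logits = X @ w
    log_lik = np.sum(y * logits - np.log1p(np.exp(logits)))
    log_prior = -0.5 * np.sum(w ** 2)
    return log_lik + log_prior

# Mean-field approximation q(w) = N(mu, diag(exp(log_s))^2).
mu, log_s = np.zeros(D), np.zeros(D)

def grad_log_q(w, mu, log_s):
    """Score function: gradient of log q(w) w.r.t. (mu, log_s)."""
    s2 = np.exp(2 * log_s)
    return (w - mu) / s2, (w - mu) ** 2 / s2 - 1.0

S, lr = 200, 0.01          # samples per step, step size (both arbitrary)
elbo_trace = []
for step in range(500):
    s = np.exp(log_s)
    W = mu + s * rng.normal(size=(S, D))   # samples w ~ q
    log_q = -0.5 * np.sum(((W - mu) / s) ** 2 + 2 * log_s
                          + np.log(2 * np.pi), axis=1)
    f = np.array([log_joint(w) for w in W]) - log_q  # ELBO integrand
    elbo_trace.append(f.mean())
    # Score-function gradient: E_q[f(w) * grad log q(w)].
    # Subtracting a constant baseline b leaves the expectation unchanged
    # but reduces variance; b = mean(f) is a crude but common choice.
    b = f.mean()
    grads = [grad_log_q(w, mu, log_s) for w in W]
    d_mu = np.stack([g[0] for g in grads])
    d_ls = np.stack([g[1] for g in grads])
    mu += lr * np.mean((f - b)[:, None] * d_mu, axis=0)
    log_s += lr * np.mean((f - b)[:, None] * d_ls, axis=0)
```

A run should show `elbo_trace` increasing as the stochastic search climbs the lower bound; the paper's contribution is a more effective control variate than the plain baseline used here, exploiting analytic lower bounds on the intractable terms.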