
Variational MCMC
We propose a new class of learning algorithms that combines variational approximation and Markov chain Monte Carlo (MCMC) simulation. Naive algorithms that use the variational approximation as proposal distribution can perform poorly because this approximation tends to underestimate the true variance and other features of the data. We solve this problem by introducing more sophisticated MCMC algorithms. One of these algorithms is a mixture of two MCMC kernels: a random-walk Metropolis kernel and a block Metropolis-Hastings (MH) kernel with a variational approximation as proposal distribution. The MH kernel allows the chain to locate regions of high probability efficiently, while the Metropolis kernel allows it to explore the vicinity of these regions. This algorithm outperforms variational approximations because it yields slightly better estimates of the mean and considerably better estimates of higher moments, such as covariances. It also outperforms standard MCMC algorithms because it locates the regions of high probability quickly, thus speeding up convergence. We demonstrate this algorithm on the problem of Bayesian parameter estimation for logistic (sigmoid) belief networks.
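The mixture-kernel idea described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the target is a stand-in 1D Gaussian posterior, and the variational approximation is a Gaussian with the right mean but deliberately underestimated variance, mirroring the failure mode the abstract mentions. At each step the sampler picks either an independence-MH move proposing from the variational approximation or a local random-walk Metropolis move; the mixture chain still targets the true distribution, so its samples recover the full variance.

```python
import math
import random

random.seed(0)

def log_target(x):
    # Unnormalized log-density of the target posterior: N(3, 1) as a stand-in.
    return -0.5 * (x - 3.0) ** 2

# Hypothetical variational approximation q = N(3, 0.5):
# correct mean, underestimated variance (0.25 instead of 1).
Q_MU, Q_SIG = 3.0, 0.5

def log_q(x):
    # Unnormalized log-density of q (Gaussian).
    return -0.5 * ((x - Q_MU) / Q_SIG) ** 2 - math.log(Q_SIG)

def mixture_mcmc(n_steps, x0=0.0, rw_step=0.5, mix_prob=0.5):
    """Mixture of an independence-MH kernel (proposal q) and a
    random-walk Metropolis kernel."""
    x, samples = x0, []
    for _ in range(n_steps):
        if random.random() < mix_prob:
            # Independence MH: jump to a high-probability region via q.
            x_new = random.gauss(Q_MU, Q_SIG)
            log_a = (log_target(x_new) - log_target(x)
                     + log_q(x) - log_q(x_new))
        else:
            # Random-walk Metropolis: explore locally around the current state.
            x_new = x + random.gauss(0.0, rw_step)
            log_a = log_target(x_new) - log_target(x)
        if math.log(random.random()) < log_a:
            x = x_new
        samples.append(x)
    return samples

samples = mixture_mcmc(20000)
burn = samples[5000:]  # discard burn-in
mean = sum(burn) / len(burn)
var = sum((s - mean) ** 2 for s in burn) / len(burn)
```

Sampling from q alone would report a variance near 0.25; the mixture chain's estimate is close to the true value of 1, which is the correction the abstract claims over a purely variational approximation.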