MCMC-Interactive Variational Inference
Leveraging well-established MCMC strategies, we propose MCMC-interactive variational inference (MIVI), which not only estimates the posterior in a time-constrained manner but also facilitates the design of MCMC transitions. By constructing a variational distribution followed by a short Markov chain with learnable parameters, MIVI takes advantage of the complementary properties of variational inference and MCMC to encourage mutual improvement. On one hand, with the variational distribution locating high-posterior-density regions, the Markov chain is optimized within the variational inference framework to efficiently target the posterior despite a small number of transitions. On the other hand, the optimized Markov chain, with its considerable flexibility, guides the variational distribution towards the posterior and alleviates its underestimation of uncertainty. Furthermore, we prove that the optimized Markov chain in MIVI admits extrapolation, meaning its marginal distribution gets closer to the true posterior as the chain grows. The Markov chain can therefore be used separately as an efficient MCMC scheme. Experiments show that MIVI not only approximates the posterior accurately and efficiently but also facilitates the design of stochastic gradient MCMC and Gibbs sampling transitions.
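The "variational distribution followed by a short Markov chain" construction can be illustrated with a minimal NumPy sketch. This is not the paper's actual MIVI objective (which jointly learns the transition parameters); it is a simplified stand-in on a one-dimensional Gaussian toy posterior, where a Gaussian variational distribution is fitted by stochastic ELBO ascent and its samples are then refined by a few unadjusted Langevin transitions with a fixed step size `eta`:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target posterior: p(z) = N(3, 1), so grad log p(z) = -(z - 3)
def grad_log_p(z):
    return -(z - 3.0)

# Variational distribution q(z) = N(mu, sigma^2), reparameterized as
# z = mu + sigma * eps with eps ~ N(0, 1)
mu, log_sigma = 0.0, 0.0
lr = 0.05
for _ in range(2000):
    eps = rng.standard_normal()
    sigma = np.exp(log_sigma)
    z = mu + sigma * eps
    g = grad_log_p(z)
    # Stochastic reparameterization gradients of the ELBO:
    # d/dmu log p(z) = g;  d/dlog_sigma [log p(z) + H(q)] = g*sigma*eps + 1
    mu += lr * g
    log_sigma += lr * (g * sigma * eps + 1.0)

# Refine variational samples with a short Markov chain (5 Langevin steps).
# In MIVI the transition parameters would themselves be learned; here the
# step size is simply fixed for illustration.
eta = 0.1
z = mu + np.exp(log_sigma) * rng.standard_normal(5000)
for _ in range(5):
    z = z + 0.5 * eta * grad_log_p(z) + np.sqrt(eta) * rng.standard_normal(z.shape)

print(z.mean(), z.std())  # both should be near the target N(3, 1)
```

Even this crude version shows the division of labor the abstract describes: the variational fit puts samples in a high-density region quickly, and the short chain then nudges them toward the target, widening the variational distribution's typically underestimated spread.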