Variational Refinement for Importance Sampling Using the Forward Kullback-Leibler Divergence

by Ghassen Jerfel, et al.

Variational Inference (VI) is a popular alternative to asymptotically exact sampling in Bayesian inference. Its main workhorse is optimization over a reverse Kullback-Leibler divergence (RKL), which typically underestimates the tails of the posterior, leading to miscalibration and potential degeneracy. Importance sampling (IS), on the other hand, is often used to fine-tune and de-bias the estimates of approximate Bayesian inference procedures. The quality of IS crucially depends on the choice of the proposal distribution. Ideally, the proposal distribution has heavier tails than the target, which is rarely achievable by minimizing the RKL. We thus propose a novel combination of optimization and sampling techniques for approximate Bayesian inference: we construct an IS proposal distribution by minimizing a forward KL (FKL) divergence. This approach guarantees asymptotic consistency and fast convergence towards both the optimal IS estimator and the optimal variational approximation. We empirically demonstrate on real data that our method is competitive with variational boosting and MCMC.
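To make the idea concrete, here is a minimal sketch (not the paper's implementation) of fitting a Gaussian proposal by stochastic minimization of the forward KL. Since FKL(p||q) = E_p[log p - log q], its gradient with respect to the proposal parameters is -E_p[∇ log q], which can be estimated by self-normalized importance weighting of samples drawn from the current proposal. The heavy-tailed Student-t target and all parameter choices below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Unnormalized heavy-tailed target: Student-t with 3 degrees of freedom
    # (illustrative stand-in for an intractable posterior).
    return -2.0 * np.log1p(x ** 2 / 3.0)

def log_q(x, mu, log_sig):
    # Log-density of the Gaussian proposal q(x; mu, sigma).
    sig = np.exp(log_sig)
    return -0.5 * ((x - mu) / sig) ** 2 - log_sig - 0.5 * np.log(2 * np.pi)

mu, log_sig = 0.5, 0.0   # initial proposal parameters
lr, n = 0.05, 2000
for _ in range(500):
    sig = np.exp(log_sig)
    x = rng.normal(mu, sig, size=n)            # sample from current proposal
    logw = log_target(x) - log_q(x, mu, log_sig)
    w = np.exp(logw - logw.max())
    w /= w.sum()                               # self-normalized importance weights
    # FKL gradient ascent on E_p[log q], estimated by reweighting the samples:
    # d log q / d mu = (x - mu) / sig^2,  d log q / d log_sig = ((x-mu)/sig)^2 - 1
    g_mu = np.sum(w * (x - mu) / sig ** 2)
    g_ls = np.sum(w * (((x - mu) / sig) ** 2 - 1.0))
    mu += lr * g_mu
    log_sig += lr * g_ls

# The refined proposal matches the target's first two moments (FKL moment
# matching), yielding a wider Gaussian better suited as an IS proposal.
print(mu, np.exp(log_sig))
```

For a symmetric target the FKL-optimal Gaussian matches the target's mean and variance, so the fitted sigma grows toward sqrt(3) here; an RKL fit would instead concentrate on the mode with lighter tails.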




