Tightening Bounds for Variational Inference by Revisiting Perturbation Theory

by Robert Bamler et al.
University of California, Irvine
Berlin Institute of Technology (Technische Universität Berlin)

Variational inference has become one of the most widely used methods in latent variable modeling. In its basic form, it employs a fully factorized variational distribution and minimizes its KL divergence to the posterior. Since this minimization can only be carried out approximately, the approximation induces a bias. In this paper, we revisit perturbation theory as a powerful way of improving the variational approximation. Perturbation theory relies on a form of Taylor expansion of the log marginal likelihood, roughly speaking in terms of the log ratio of the true posterior to its variational approximation. While the first-order term recovers the classical variational bound, higher-order terms yield corrections that tighten it. However, traditional perturbation theory does not provide a lower bound, making it unsuitable for stochastic optimization. In this paper, we present an alternative derivation of corrections to the ELBO that resemble perturbation theory but result in a valid bound. We show in experiments on Gaussian processes and variational autoencoders that the new bounds are more mass-covering, and that the resulting posterior covariances are closer to those of the true posterior, leading to higher likelihoods on held-out data.
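To make the idea concrete, here is a minimal Monte Carlo sketch of the perturbative family of bounds described above: a truncated exponential series applied to the log-weight log p̃(x, z) − log q(z), which for odd truncation order K yields a valid lower bound on the marginal likelihood, with K = 1 recovering (the exponential of) the standard ELBO. The toy target, the variational distribution, and the choice of reference point V0 are illustrative assumptions, not the paper's experimental setup.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Toy example (hypothetical): unnormalized target log p~(z) = -z^2/2,
# i.e. a standard normal with true normalizer Z = sqrt(2*pi), approximated
# by a deliberately mismatched Gaussian q = N(0, 0.8^2).
sigma_q = 0.8
n = 400_000
z = rng.normal(0.0, sigma_q, size=n)

log_p = -0.5 * z**2                                            # unnormalized target
log_q = -0.5 * z**2 / sigma_q**2 - 0.5 * np.log(2 * np.pi * sigma_q**2)
u = log_p - log_q                                              # log-weight

def perturbative_bound(u, K, V0):
    """Monte Carlo estimate of e^{V0} * E_q[ sum_{k=0}^K (u - V0)^k / k! ].

    This truncates the series e^{V0} * e^{u - V0} = p(x); for odd K the
    truncated polynomial lower-bounds the exponential, giving a valid bound.
    """
    v = u - V0
    terms = sum(v**k / math.factorial(k) for k in range(K + 1))
    return np.exp(V0) * terms.mean()

V0 = u.mean()                      # reference point; the paper optimizes V0 jointly
L1 = perturbative_bound(u, 1, V0)  # with V0 = mean(u), this equals exp(ELBO)
L3 = perturbative_bound(u, 3, V0)  # third-order correction: a tighter lower bound
Z_true = np.sqrt(2 * np.pi)        # true marginal likelihood of the toy target
```

On this example the ordering L1 < L3 < Z holds: the third-order bound tightens the ELBO while remaining a valid lower bound, which is what makes it usable as a stochastic optimization objective.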




