Linear Convergence of Black-Box Variational Inference: Should We Stick the Landing?

07/27/2023
by Kyurae Kim, et al.

We prove that black-box variational inference (BBVI) with control variates, in particular the sticking-the-landing (STL) estimator, converges at a geometric (traditionally called "linear") rate under perfect variational family specification. Specifically, we prove a quadratic bound on the gradient variance of the STL estimator, one that also covers misspecified variational families. Combined with previous work on the quadratic variance condition, this directly implies convergence of BBVI with projected stochastic gradient descent. We also improve the existing analysis of the regular closed-form entropy gradient estimators, which enables a comparison against the STL estimator and provides explicit non-asymptotic complexity guarantees for both.
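To make the STL estimator concrete, here is a minimal sketch (not the paper's code) of the two ELBO gradient estimators compared in the abstract, for a mean-field Gaussian variational family and a hypothetical standard-normal target. The STL estimator applies a stop-gradient to the variational parameters inside log q, dropping the score-function term; under perfect specification the resulting gradient is exactly zero at the optimum, illustrating why its variance can vanish there while the closed-form entropy estimator's does not.

```python
import jax
import jax.numpy as jnp

def log_p(z):
    # Hypothetical target: standard normal log-density (up to dimension z.size)
    return jnp.sum(-0.5 * z**2 - 0.5 * jnp.log(2 * jnp.pi))

def log_q(lam, z):
    # Mean-field Gaussian q_lam with lam = (mu, log_sigma)
    mu, log_sig = lam
    return jnp.sum(
        -0.5 * ((z - mu) / jnp.exp(log_sig)) ** 2
        - log_sig
        - 0.5 * jnp.log(2 * jnp.pi)
    )

def sample(lam, eps):
    # Reparameterization: z = mu + sigma * eps
    mu, log_sig = lam
    return mu + jnp.exp(log_sig) * eps

def elbo_stl(lam, eps):
    # STL estimator: stop the gradient through lam inside log q,
    # so only the sample path z contributes (no score-function term)
    z = sample(lam, eps)
    return log_p(z) - log_q(jax.lax.stop_gradient(lam), z)

def elbo_closed_form(lam, eps):
    # Regular estimator: Monte Carlo energy term + closed-form Gaussian entropy
    z = sample(lam, eps)
    _, log_sig = lam
    entropy = jnp.sum(log_sig + 0.5 * (1.0 + jnp.log(2 * jnp.pi)))
    return log_p(z) + entropy

grad_stl = jax.grad(elbo_stl)
grad_cf = jax.grad(elbo_closed_form)
```

At the optimum lam = (0, 0), where q matches p exactly, `grad_stl` returns zero for every noise draw `eps`, whereas `grad_cf` fluctuates with `eps` — the zero-variance-at-the-optimum property that drives the geometric convergence rate discussed above.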


