A Non-Asymptotic Analysis for Stein Variational Gradient Descent

06/17/2020
by Anna Korba et al.

We study the Stein Variational Gradient Descent (SVGD) algorithm, which optimises a set of particles to approximate a target probability distribution π ∝ e^(-V) on R^d. In the population limit, SVGD performs gradient descent on the KL divergence with respect to π in the space of probability distributions, where the gradient is smoothed through a kernel integral operator. In this paper, we provide a novel finite-time analysis of the SVGD algorithm. We obtain a descent lemma establishing that the algorithm decreases the objective at each iteration and provably converges, under less restrictive assumptions on the step size than required in earlier analyses. We further provide a guarantee on the convergence rate in Kullback-Leibler divergence, assuming π satisfies a Stein log-Sobolev inequality as in Duncan et al. (2019), which takes into account the geometry induced by the smoothed KL gradient.
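To make the iteration concrete: the standard SVGD update analysed here moves each particle as x_i ← x_i + ε φ(x_i), with φ(x_i) = (1/n) Σ_j [ k(x_j, x_i) ∇ log π(x_j) + ∇_{x_j} k(x_j, x_i) ], where the first term pulls particles toward high-density regions of π and the second term repels particles from one another. Below is a minimal numpy sketch of this update with an RBF kernel; the step size eps, bandwidth h, and the standard-Gaussian target (V(x) = ||x||²/2, so ∇ log π(x) = -x) are illustrative assumptions, not choices made in the paper.

```python
import numpy as np

def rbf_kernel_and_grad(x, h=1.0):
    # x: (n, d) particle positions. Returns K with K[j, i] = k(x_j, x_i)
    # = exp(-||x_j - x_i||^2 / h), and grad_K[j, i] = grad_{x_j} k(x_j, x_i).
    diff = x[:, None, :] - x[None, :, :]           # diff[j, i] = x_j - x_i, shape (n, n, d)
    sq_dist = np.sum(diff ** 2, axis=-1)           # (n, n)
    K = np.exp(-sq_dist / h)
    grad_K = -2.0 / h * diff * K[:, :, None]       # (n, n, d)
    return K, grad_K

def svgd_step(x, grad_log_pi, eps=0.1, h=1.0):
    # One SVGD iteration: x_i <- x_i + eps * phi(x_i), with
    # phi(x_i) = (1/n) sum_j [ k(x_j, x_i) grad log pi(x_j) + grad_{x_j} k(x_j, x_i) ].
    n = x.shape[0]
    K, grad_K = rbf_kernel_and_grad(x, h)
    score = grad_log_pi(x)                         # (n, d); note grad log pi = -grad V
    phi = (K.T @ score + grad_K.sum(axis=0)) / n   # (n, d)
    return x + eps * phi

# Illustrative target: standard Gaussian, so grad log pi(x) = -x.
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 2)) + 5.0                # particles initialised far from the target
for _ in range(500):
    x = svgd_step(x, grad_log_pi=lambda x: -x)
print(x.mean(axis=0), x.std(axis=0))               # should approach (0, 0) and (1, 1)
```

The descent lemma in the paper concerns the population limit of this scheme; the sketch above is the usual finite-particle implementation, with a fixed bandwidth rather than the median heuristic often used in practice.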


Related research

06/01/2022  Convergence of Stein Variational Gradient Descent under a Weaker Smoothness Condition
Stein Variational Gradient Descent (SVGD) is an important alternative to...

04/25/2017  Stein Variational Gradient Descent as Gradient Flow
Stein variational gradient descent (SVGD) is a deterministic sampling al...

11/17/2022  A Finite-Particle Convergence Rate for Stein Variational Gradient Descent
We provide a first finite-particle convergence rate for Stein variationa...

05/20/2021  Kernel Stein Discrepancy Descent
Among dissimilarities between probability distributions, the Kernel Stei...

06/23/2021  Sampling with Mirrored Stein Operators
We introduce a new family of particle evolution samplers suitable for co...

09/07/2022  Riemannian optimization for non-centered mixture of scaled Gaussian distributions
This paper studies the statistical model of the non-centered mixture of ...

11/02/2020  Homeomorphic-Invariance of EM: Non-Asymptotic Convergence in KL Divergence for Exponential Families via Mirror Descent
Expectation maximization (EM) is the default algorithm for fitting proba...
