
Annealed Stein Variational Gradient Descent

by Francesco D'Angelo et al.

Particle-based optimization algorithms have recently been developed as sampling methods that iteratively update a set of particles to approximate a target distribution. In particular, Stein variational gradient descent (SVGD) has gained attention in the approximate-inference literature for its flexibility and accuracy. We empirically explore the ability of this method to sample from multi-modal distributions and focus on two important issues: (i) the inability of the particles to escape from local modes and (ii) the inefficacy in reproducing the density of the different regions. We propose an annealing schedule to address both issues and show, through various experiments, that this simple solution leads to significant improvements in mode coverage without invalidating any theoretical properties of the original algorithm.
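The idea behind annealing SVGD can be sketched in a few lines of NumPy. In the standard SVGD update, each particle moves along a kernel-weighted average of the target's score (the attractive term) plus a kernel-gradient repulsive term. An annealed variant scales the attractive term by a factor gamma(t) that grows toward 1, so early iterations are repulsion-dominated and particles spread out across modes before the score pulls them in. This is only a minimal illustration, not the paper's implementation: the linear warm-up schedule, the fixed RBF bandwidth `h = 1.0`, and the two-Gaussian target below are all assumptions made for the sketch.

```python
import numpy as np

def mixture_score(x):
    # Score (gradient of log density) of 0.5*N(-4, 1) + 0.5*N(4, 1), elementwise.
    p1 = np.exp(-0.5 * (x + 4.0) ** 2)
    p2 = np.exp(-0.5 * (x - 4.0) ** 2)
    return (-(x + 4.0) * p1 - (x - 4.0) * p2) / (p1 + p2)

def annealed_svgd(score, x0, n_iter=500, lr=0.1, h=1.0, warmup=0.5):
    # SVGD with the attractive (score) term scaled by gamma(t) in (0, 1].
    x = x0.copy()
    n = x.shape[0]
    for t in range(n_iter):
        # Linear warm-up over the first `warmup` fraction of iterations;
        # one possible schedule, chosen here for simplicity.
        gamma = min(1.0, (t + 1) / (warmup * n_iter))
        diff = x[:, None, :] - x[None, :, :]                     # (n, n, d)
        K = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * h ** 2))   # RBF kernel matrix
        # Repulsive term: sum_j grad_{x_j} k(x_j, x_i) = sum_j K[i,j]*(x_i - x_j)/h^2
        repulse = (K.sum(axis=1, keepdims=True) * x - K @ x) / h ** 2
        phi = (gamma * (K @ score(x)) + repulse) / n
        x += lr * phi
    return x

rng = np.random.default_rng(0)
x0 = rng.normal(0.0, 1.0, size=(50, 1))  # particles initialized between the modes
samples = annealed_svgd(mixture_score, x0)
```

With gamma held at 1 throughout (plain SVGD), particles initialized inside one basin tend to stay there; the warm-up phase lets the repulsive term distribute them before the modes take over.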

