Stochastic Particle-Optimization Sampling and the Non-Asymptotic Convergence Theory

09/05/2018
by   Jianyi Zhang, et al.

Particle-optimization sampling (POS) is a recently developed technique that generates high-quality samples from a target distribution by iteratively updating a set of interacting particles. A representative algorithm is Stein variational gradient descent (SVGD). Despite its significant empirical success, the non-asymptotic convergence behavior of SVGD remains unknown. In this paper, we generalize POS to a stochastic setting by injecting random noise into the particle updates, yielding stochastic particle-optimization sampling (SPOS); standard SVGD can be regarded as a special case of our framework. Notably, for the first time, we develop a non-asymptotic convergence theory for the SPOS framework (which includes SVGD), characterizing the bias of a sample approximation with respect to the number of particles and iterations under both convex- and nonconvex-energy-function settings. Remarkably, we provide a theoretical understanding of a pitfall of SVGD that the proposed SPOS framework avoids: under certain conditions, particles in SVGD tend to collapse to a local mode. Our theory is based on the analysis of nonlinear stochastic differential equations, and serves as an extension of, and a complement to, asymptotic convergence theory for SVGD such as [1].

Related research

11/20/2018
Variance Reduction in Stochastic Particle-Optimization Sampling
Stochastic particle-optimization sampling (SPOS) is a recently-developed...

01/24/2021
Annealed Stein Variational Gradient Descent
Particle based optimization algorithms have recently been developed as s...

03/15/2022
Regenerative Particle Thompson Sampling
This paper proposes regenerative particle Thompson sampling (RPTS), a fl...

05/27/2023
Provably Fast Finite Particle Variants of SVGD via Virtual Particle Stochastic Approximation
Stein Variational Gradient Descent (SVGD) is a popular variational infer...

02/09/2023
Efficient displacement convex optimization with particle gradient descent
Particle gradient descent, which uses particles to represent a probabili...

06/04/2022
Stochastic Multiple Target Sampling Gradient Descent
Sampling from an unnormalized target distribution is an essential proble...

01/30/2023
Reweighted Interacting Langevin Diffusions: an Accelerated Sampling Method for Optimization
We proposed a new technique to accelerate sampling methods for solving d...
