Scaling limit of the Stein variational gradient descent part I: the mean field regime

05/10/2018
by Jianfeng Lu, et al.

We study an interacting particle system in R^d motivated by Stein variational gradient descent [Q. Liu and D. Wang, NIPS 2016], a deterministic algorithm for sampling from a given probability density with unknown normalization. We prove that in the large-particle limit the empirical measure converges to a solution of a nonlocal, nonlinear PDE. We also prove global well-posedness and uniqueness of the solution to the limiting PDE. Finally, we prove that the solution to the PDE converges to the unique invariant solution in the large-time limit.
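The interacting particle system behind this analysis is the standard SVGD update: each particle moves along a kernel-weighted average of the target score plus a repulsive kernel-gradient term. A minimal sketch with NumPy and an RBF kernel (the function names, fixed bandwidth `h`, and step size `eps` are illustrative choices, not the paper's notation):

```python
import numpy as np

def rbf_kernel(X, h):
    """RBF kernel matrix K[j, i] = exp(-|x_j - x_i|^2 / h) and its
    gradient w.r.t. the first argument, gradK[j, i] = grad_{x_j} k(x_j, x_i)."""
    diff = X[:, None, :] - X[None, :, :]          # (n, n, d)
    K = np.exp(-np.sum(diff**2, axis=-1) / h)     # (n, n)
    gradK = -2.0 / h * diff * K[:, :, None]       # (n, n, d)
    return K, gradK

def svgd_step(X, grad_logp, eps, h=1.0):
    """One deterministic SVGD update:
    x_i <- x_i + eps * (1/n) sum_j [k(x_j, x_i) grad log p(x_j)
                                    + grad_{x_j} k(x_j, x_i)]."""
    n = X.shape[0]
    K, gradK = rbf_kernel(X, h)
    phi = (K.T @ grad_logp(X) + gradK.sum(axis=0)) / n
    return X + eps * phi
```

For example, with the target N(0, 1) (so `grad_logp = lambda x: -x`), iterating `svgd_step` drives a cloud of particles initialized far from the origin toward an empirical approximation of the standard Gaussian; the mean-field limit studied in the paper describes the n → ∞ behavior of exactly this flow.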


