A Dynamical System View of Langevin-Based Non-Convex Sampling

10/25/2022
by Mohammad Reza Karimi et al.

Non-convex sampling is a key challenge in machine learning, central to non-convex optimization in deep learning as well as to approximate probabilistic inference. Despite its significance, many important theoretical challenges remain: existing guarantees (1) typically hold only for the averaged iterates rather than the more desirable last iterates, (2) lack convergence metrics that capture the scales of the variables, such as Wasserstein distances, and (3) mainly apply to elementary schemes such as stochastic gradient Langevin dynamics. In this paper, we develop a new framework that resolves these issues by harnessing several tools from the theory of dynamical systems. Our key result is that, for a large class of state-of-the-art sampling schemes, their last-iterate convergence in Wasserstein distance can be reduced to the study of their continuous-time counterparts, which are much better understood. Coupled with standard assumptions of MCMC sampling, our theory immediately yields the last-iterate Wasserstein convergence of many advanced sampling schemes such as proximal, randomized mid-point, and Runge-Kutta integrators. Beyond existing methods, our framework also motivates more efficient schemes that enjoy the same rigorous guarantees.
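To make the contrast between the elementary and advanced schemes named above concrete, here is a minimal Python sketch of one unadjusted-Langevin (SGLD-style) step and one common rendering of the randomized mid-point discretization of the overdamped Langevin diffusion dX_t = ∇log π(X_t) dt + √2 dW_t. The function and parameter names (`grad_log_pi`, `h`, etc.) are illustrative choices, not the paper's notation, and the sketch omits stochastic-gradient noise and all step-size tuning.

```python
import numpy as np

def langevin_step(grad_log_pi, x, h, rng):
    """One Euler-Maruyama (SGLD-style) step targeting pi:
    x' = x + h * grad log pi(x) + sqrt(2h) * N(0, I)."""
    return x + h * grad_log_pi(x) + np.sqrt(2.0 * h) * rng.standard_normal(x.shape)

def randomized_midpoint_step(grad_log_pi, x, h, rng):
    """One randomized mid-point step: the gradient is evaluated at a
    uniformly random point along the step, and the Brownian increment
    over [0, h] is split so that the mid-point and the full step share
    its first piece."""
    alpha = rng.uniform()  # random location in (0, 1) along the step
    w_mid = np.sqrt(2.0 * alpha * h) * rng.standard_normal(x.shape)
    x_mid = x + alpha * h * grad_log_pi(x) + w_mid
    w_rest = np.sqrt(2.0 * (1.0 - alpha) * h) * rng.standard_normal(x.shape)
    return x + h * grad_log_pi(x_mid) + (w_mid + w_rest)

# Example: last iterate for a standard Gaussian target pi = N(0, I),
# for which grad log pi(y) = -y.
rng = np.random.default_rng(0)
x = np.zeros(2)
for _ in range(1000):
    x = randomized_midpoint_step(lambda y: -y, x, h=0.1, rng=rng)
print(x)  # a single (last-iterate) approximate sample from N(0, I)
```

The paper's guarantees concern precisely this last iterate `x`: its convergence to the target in Wasserstein distance, rather than the convergence of an average over the trajectory.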
