Projected Stochastic Gradient Langevin Algorithms for Constrained Sampling and Non-Convex Learning

12/22/2020
by Andrew Lamperski, et al.

Langevin algorithms are gradient descent methods with additive noise. They have been used for decades in Markov chain Monte Carlo (MCMC) sampling, optimization, and learning. Their convergence properties for unconstrained non-convex optimization and learning problems have been studied widely in the last few years. Other work has examined projected Langevin algorithms for sampling from log-concave distributions restricted to convex compact sets; for learning and optimization, log-concave distributions correspond to convex losses. In this paper, we analyze the case of non-convex losses with compact convex constraint sets and IID external data variables. We term the resulting method the projected stochastic gradient Langevin algorithm (PSGLA). We show that the algorithm achieves a deviation of O(T^{-1/4}(log T)^{1/2}) from its target distribution in 1-Wasserstein distance. For optimization and learning, we show that the algorithm achieves ϵ-suboptimal solutions, on average, provided that it is run for a time that is polynomial in ϵ^{-1} and slightly super-exponential in the problem dimension.
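To make the iteration concrete, below is a minimal sketch of a PSGLA-style update: a stochastic gradient step computed from mini-batches of IID data, plus injected Gaussian noise, followed by Euclidean projection onto a compact convex set. The quadratic toy loss, ball-shaped constraint set, step size, and inverse temperature are illustrative assumptions for this sketch, not the paper's analysis setting or experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d, R = 5, 2.0                      # problem dimension and constraint-ball radius (illustrative)
data = rng.normal(size=(1000, d))  # stand-in for the IID external data variables

def stoch_grad(x, batch):
    """Stochastic gradient of a toy least-squares loss f(x; z) = ||x - z||^2 / 2."""
    return x - batch.mean(axis=0)

def project(x, radius=R):
    """Euclidean projection onto the compact convex set {x : ||x|| <= radius}."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

def psgla(steps=10_000, eta=1e-3, beta=10.0, batch_size=32):
    """Run a projected stochastic gradient Langevin iteration (hypothetical hyperparameters)."""
    x = np.zeros(d)
    for _ in range(steps):
        batch = data[rng.integers(0, len(data), size=batch_size)]
        noise = rng.normal(size=d)
        # Noisy stochastic gradient step, then projection back onto the constraint set.
        x = project(x - eta * stoch_grad(x, batch) + np.sqrt(2.0 * eta / beta) * noise)
    return x

print(psgla())
```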


