Nonconvex sampling with the Metropolis-adjusted Langevin algorithm

02/22/2019
by Oren Mangoubi, et al.

The Langevin Markov chain algorithms are widely deployed methods for sampling from distributions in challenging high-dimensional and non-convex statistics and machine learning applications. Despite this, current bounds for Langevin algorithms are slower than those of competing algorithms in many important situations, for instance when sampling from weakly log-concave distributions, or when sampling from or optimizing non-convex log-densities. In this paper, we obtain improved bounds in many of these situations, showing that the Metropolis-adjusted Langevin algorithm (MALA) is faster than the best bounds for its competitor algorithms when the target distribution satisfies weak third- and fourth-order regularity properties associated with the input data. Our regularity conditions are weaker than the usual Euclidean operator-norm regularity properties, allowing us to prove faster bounds for a much larger class of distributions than would be possible with the usual operator-norm approach, including statistics and machine learning applications where the data satisfy a certain incoherence condition. In particular, we show that our regularity conditions yield faster bounds for applications including sampling problems in Bayesian logistic regression with weakly convex priors, and the non-convex optimization problem of learning linear classifiers with zero-one loss functions. Our main technical contribution is an analysis of the Metropolis acceptance probability of MALA in terms of its "energy-conservation error," together with a bound on this error in terms of third- and fourth-order regularity conditions. The combination of this higher-order analysis of the energy-conservation error with the conductance method is key to obtaining bounds with a sub-linear dependence on the dimension d in the non-strongly log-concave setting.
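For background, the sketch below shows a generic MALA step in Python: a proposal obtained from one Euler-Maruyama step of the Langevin diffusion for a target density proportional to exp(-f(x)), followed by a Metropolis-Hastings accept/reject. This is the standard algorithm the abstract refers to, not an implementation of the paper's analysis; the function names, step size, and toy target in the usage example are illustrative assumptions. The log acceptance ratio computed inside the loop is the quantity whose behavior the paper controls via the energy-conservation error.

```python
import numpy as np

def mala(f, grad_f, x0, step, n_steps, rng=None):
    """Metropolis-adjusted Langevin algorithm for pi(x) ∝ exp(-f(x)).

    f      : potential (negative log-density, up to an additive constant)
    grad_f : gradient of f
    x0     : initial point (d-dimensional array)
    step   : step size eta of the Langevin discretization
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        # Langevin proposal: one Euler-Maruyama step of the Langevin diffusion
        noise = rng.standard_normal(x.shape)
        y = x - step * grad_f(x) + np.sqrt(2.0 * step) * noise

        def log_q(b, a):
            # log-density, up to a constant, of the Gaussian proposal
            # N(a - step * grad_f(a), 2 * step * I) evaluated at b
            diff = b - (a - step * grad_f(a))
            return -np.dot(diff, diff) / (4.0 * step)

        # Metropolis-Hastings log acceptance ratio:
        # log [ pi(y) q(x|y) / (pi(x) q(y|x)) ]
        log_alpha = (f(x) - f(y)) + (log_q(x, y) - log_q(y, x))
        if np.log(rng.uniform()) < log_alpha:
            x = y  # accept the proposal; otherwise remain at x
        samples.append(x.copy())
    return np.array(samples)
```

As a quick usage check (a hypothetical toy example): sampling from a standard Gaussian in d = 10 dimensions, where f(x) = 0.5 x·x and grad_f(x) = x.

```python
d = 10
chain = mala(f=lambda x: 0.5 * x @ x, grad_f=lambda x: x,
             x0=np.zeros(d), step=0.1, n_steps=5000)
```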
