Breaking Reversibility Accelerates Langevin Dynamics for Global Non-Convex Optimization

12/19/2018
by Xuefeng Gao, et al.

Langevin dynamics (LD) has proven to be a powerful technique for non-convex optimization: it is an efficient algorithm for finding local minima while eventually visiting a global minimum on longer time scales. LD is based on the first-order Langevin diffusion, which is reversible in time. We study two variants based on non-reversible Langevin diffusions: underdamped Langevin dynamics (ULD) and Langevin dynamics with a non-symmetric drift (NLD). Adapting the techniques of Tzen, Liang and Raginsky (2018) for LD to non-reversible diffusions, we show that for a given local minimum within an arbitrary distance of the initialization, with high probability, the ULD trajectory either ends up outside a small neighborhood of this local minimum within a recurrence time that depends on the smallest eigenvalue of the Hessian at the local minimum, or it enters this neighborhood by the recurrence time and stays there for a potentially exponentially long escape time. The ULD algorithm improves on the recurrence time obtained for LD by Tzen, Liang and Raginsky (2018) in its dependence on the smallest eigenvalue of the Hessian at the local minimum. A similar result and improvement are obtained for the NLD algorithm. We also show that, in discrete time, the non-reversible variants can exit the basin of attraction of a local minimum faster when the objective has two local minima separated by a saddle point, and we quantify the amount of improvement. Our analysis suggests that non-reversible Langevin algorithms are more efficient both at locating a local minimum and at exploring the state space. The analysis is based on a quadratic approximation of the objective around a local minimum. As a by-product, we obtain optimal mixing rates in the 2-Wasserstein distance for the two non-reversible Langevin algorithms applied to quadratic objectives.
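The abstract describes the three dynamics only at the level of continuous-time diffusions. The sketch below shows standard Euler-Maruyama discretizations of all three on a toy quadratic objective, the setting in which the paper's quadratic-approximation analysis operates. The step size eta, inverse temperature beta, friction coefficient gamma, the antisymmetric matrix J, and the objective itself are illustrative assumptions, not quantities taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy quadratic objective f(x) = 0.5 * x^T H x and its gradient (illustrative).
H = np.array([[2.0, 0.0], [0.0, 0.5]])
grad_f = lambda x: H @ x

eta, beta, gamma = 0.01, 5.0, 1.0        # step size, inverse temperature, friction (assumed values)
J = np.array([[0.0, 1.0], [-1.0, 0.0]])  # antisymmetric perturbation for NLD (J^T = -J)

def ld_step(x):
    """Reversible (overdamped) first-order Langevin dynamics."""
    return x - eta * grad_f(x) + np.sqrt(2 * eta / beta) * rng.standard_normal(x.shape)

def uld_step(x, v):
    """Underdamped Langevin dynamics (ULD): the noise enters through the velocity."""
    v = v - eta * (gamma * v + grad_f(x)) + np.sqrt(2 * gamma * eta / beta) * rng.standard_normal(v.shape)
    x = x + eta * v
    return x, v

def nld_step(x):
    """Langevin dynamics with a non-symmetric drift (NLD): gradient preconditioned by I + J."""
    return x - eta * (np.eye(2) + J) @ grad_f(x) + np.sqrt(2 * eta / beta) * rng.standard_normal(x.shape)

x_ld = np.array([3.0, 3.0])
x_nld = np.array([3.0, 3.0])
x_uld, v_uld = np.array([3.0, 3.0]), np.zeros(2)
for _ in range(5000):
    x_ld = ld_step(x_ld)
    x_uld, v_uld = uld_step(x_uld, v_uld)
    x_nld = nld_step(x_nld)
print(x_ld, x_uld, x_nld)
```

All three chains target (approximately) the same Gibbs measure proportional to exp(-beta f); the non-reversible variants differ only in how they traverse the landscape, which is where the accelerated recurrence and escape times come from.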


Related research

- Non-Convex Stochastic Optimization via Non-Reversible Stochastic Gradient Langevin Dynamics (04/06/2020): Stochastic gradient Langevin dynamics (SGLD) is a powerful algorithm for ...
- Local Optimality and Generalization Guarantees for the Langevin Algorithm via Empirical Metastability (02/18/2018): We study the detailed path-wise behavior of the discrete-time Langevin a...
- Neon2: Finding Local Minima via First-Order Oracles (11/17/2017): We propose a reduction for non-convex optimization that can (1) turn a s...
- Accelerating Convergence in Global Non-Convex Optimization with Reversible Diffusion (05/19/2023): Langevin Dynamics has been extensively employed in global non-convex opt...
- Natasha: Faster Non-Convex Stochastic Optimization Via Strongly Non-Convex Parameter (02/02/2017): Given a nonconvex function f(x) that is an average of n smooth functions...
- Replica Exchange for Non-Convex Optimization (01/23/2020): Gradient descent (GD) is known to converge quickly for convex objective ...
- Riemannian Langevin Algorithm for Solving Semidefinite Programs (10/21/2020): We propose a Langevin diffusion-based algorithm for non-convex optimizat...
