Global Convergence of Langevin Dynamics Based Algorithms for Nonconvex Optimization

07/20/2017
by Pan Xu, et al.

We present a unified framework to analyze the global convergence of Langevin dynamics based algorithms for nonconvex finite-sum optimization with n component functions. At the core of our analysis is a new decomposition scheme of the optimization error, under which we directly analyze the ergodicity of the numerical approximations of Langevin dynamics and prove sharp convergence rates. We establish the first global convergence guarantee of gradient Langevin dynamics (GLD) with iteration complexity O(1/ϵ · log(1/ϵ)). In addition, we improve the convergence rate of stochastic gradient Langevin dynamics (SGLD) to the "almost minimizer", which does not depend on the undesirable uniform spectral gap introduced in previous studies. Furthermore, we prove for the first time a global convergence guarantee for variance-reduced stochastic gradient Langevin dynamics (VR-SGLD), with iteration complexity O(m/(Bϵ^3) · log(1/ϵ)), where B is the mini-batch size and m is the length of the inner loop. We show that the gradient complexity of VR-SGLD is O(n^(1/2)/ϵ^(3/2) · log(1/ϵ)), which outperforms the O(n/ϵ · log(1/ϵ)) gradient complexity of GLD when the number of component functions satisfies n ≥ 1/ϵ. Our theoretical analysis sheds light on using Langevin dynamics based algorithms for nonconvex optimization with provable guarantees.
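To make the abstract's objects concrete, here is a minimal NumPy sketch of the three update rules the paper analyzes. The function names, signatures, and hyperparameter values (step size eta, inverse temperature beta) are illustrative assumptions, not the authors' notation or code. GLD takes a full-gradient step plus Gaussian noise of scale sqrt(2·eta/beta); SGLD swaps in a mini-batch gradient estimate; VR-SGLD replaces that estimate with an SVRG-style control variate built from a per-epoch snapshot, which is where the inner-loop length m and mini-batch size B from the stated complexities enter.

```python
import numpy as np

def gld(grad_full, x0, eta, beta, num_iters, rng):
    """Gradient Langevin dynamics: a full-gradient descent step plus
    isotropic Gaussian noise scaled by sqrt(2*eta/beta)."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(num_iters):
        x = (x - eta * grad_full(x)
             + np.sqrt(2.0 * eta / beta) * rng.standard_normal(x.shape))
    return x

def sgld(grad_i, n, x0, eta, beta, num_iters, batch_size, rng):
    """SGLD: the same update as GLD, with the full gradient replaced
    by a mini-batch estimate over the n component functions."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(num_iters):
        batch = rng.choice(n, size=batch_size, replace=False)
        g = np.mean([grad_i(x, i) for i in batch], axis=0)
        x = x - eta * g + np.sqrt(2.0 * eta / beta) * rng.standard_normal(x.shape)
    return x

def vr_sgld(grad_i, n, x0, eta, beta, num_epochs, m, batch_size, rng):
    """VR-SGLD: each epoch computes one full gradient at a snapshot,
    then runs an inner loop of length m whose mini-batch gradients are
    corrected by an SVRG-style control variate."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(num_epochs):
        snap = x.copy()
        full_grad = np.mean([grad_i(snap, i) for i in range(n)], axis=0)
        for _ in range(m):
            batch = rng.choice(n, size=batch_size, replace=False)
            # Unbiased for the full gradient; its variance shrinks as
            # the iterate x approaches the snapshot.
            g = (np.mean([grad_i(x, i) for i in batch], axis=0)
                 - np.mean([grad_i(snap, i) for i in batch], axis=0)
                 + full_grad)
            x = x - eta * g + np.sqrt(2.0 * eta / beta) * rng.standard_normal(x.shape)
    return x

# Smoke test on a toy finite sum F(x) = (1/n) * sum_i 0.5*||x - a_i||^2
# (convex, so only a sanity check, not the paper's nonconvex setting).
rng = np.random.default_rng(0)
a = rng.standard_normal((10, 3))
x_hat = vr_sgld(lambda x, i: x - a[i], n=10, x0=np.zeros(3), eta=0.05,
                beta=100.0, num_epochs=20, m=10, batch_size=2, rng=rng)
```

Note that as beta grows the noise term vanishes and all three recursions reduce to plain (stochastic) gradient descent; the injected Gaussian term is what allows the iterates to escape shallow local minima and approach the "almost minimizer" the abstract refers to.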
