Improved Convergence Rate of Stochastic Gradient Langevin Dynamics with Variance Reduction and its Application to Optimization

03/30/2022
by   Yuri Kinoshita, et al.

Stochastic Gradient Langevin Dynamics (SGLD) is one of the most fundamental algorithms for the sampling and non-convex optimization problems that arise in many machine learning applications. In particular, its variance-reduced versions have recently attracted considerable attention. In this paper, we study two variants of this kind, namely the Stochastic Variance Reduced Gradient Langevin Dynamics and the Stochastic Recursive Gradient Langevin Dynamics. We prove their convergence to the target distribution in terms of KL divergence under only smoothness and a log-Sobolev inequality, which are weaker assumptions than those used in prior analyses of these algorithms. With the batch size and the inner-loop length set to √n, the gradient complexity to achieve ϵ-precision is Õ((n + dn^{1/2}ϵ^{-1})γ^2 L^2 α^{-2}), an improvement over all previous analyses. We also present several important applications of our result to non-convex optimization.
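To make the setup concrete, the following is a minimal sketch of the variance-reduced Langevin dynamics scheme the abstract describes: an outer loop stores a snapshot and its full gradient, and an inner loop of length √n takes Langevin steps using a variance-reduced gradient estimate over batches of size √n. The toy quadratic objective, the function names, and all parameter values here are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def svrg_ld(data, x0, step, n_outer, rng):
    """Sketch of Stochastic Variance Reduced Gradient Langevin Dynamics.

    Target distribution pi(x) ∝ exp(-F(x)) with F(x) = sum_i f_i(x), where
    f_i(x) = ||x - data[i]||^2 / (2n) is a toy quadratic loss, so pi is a
    standard Gaussian centred at the data mean (illustrative choice only).
    """
    n = len(data)
    b = m = max(1, int(np.sqrt(n)))          # batch size = inner-loop length = sqrt(n)
    grad_i = lambda x, i: (x - data[i]) / n  # gradient of the i-th component f_i
    x = np.asarray(x0, dtype=float)
    for _ in range(n_outer):
        x_snap = x.copy()
        # full gradient of F at the snapshot, computed once per outer loop
        full_grad = sum(grad_i(x_snap, i) for i in range(n))
        for _ in range(m):
            batch = rng.integers(0, n, size=b)
            # variance-reduced estimate of the full gradient at the current point
            g = full_grad + (n / b) * sum(grad_i(x, i) - grad_i(x_snap, i)
                                          for i in batch)
            # Langevin step: gradient descent plus injected Gaussian noise
            x = x - step * g + np.sqrt(2.0 * step) * rng.normal(size=x.shape)
    return x

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, size=(100, 2))  # n = 100 samples in 2-D
sample = svrg_ld(data, np.zeros(2), step=0.05, n_outer=50, rng=rng)
```

After enough iterations, `sample` is (approximately) a draw from the stationary Gaussian centred at `data.mean(axis=0)`; the snapshot-based estimator keeps the per-step gradient cost at √n instead of n.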


