Stochastic Gradient Langevin with Delayed Gradients

06/12/2020
by Vyacheslav Kungurtsev, et al.

Stochastic Gradient Langevin Dynamics (SGLD) ensures strong guarantees for convergence in measure when sampling log-concave posterior distributions, achieved by adding noise to stochastic gradient iterates. Given the size of many practical problems, parallelizing across several asynchronously running processors is a popular strategy for reducing the end-to-end computation time of stochastic optimization algorithms. In this paper, we are the first to investigate the effect of asynchronous computation, in particular the evaluation of stochastic Langevin gradients at delayed iterates, on convergence in measure. To do so, we exploit recent results that model Langevin dynamics as solving a convex optimization problem on the space of measures. We show that the rate of convergence in measure is not significantly affected by the error introduced by delayed gradient information, suggesting significant potential for wall-clock speedup. We confirm our theoretical results with numerical experiments on practical problems.
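The delayed-gradient setting described above can be sketched in a few lines: each SGLD step adds Gaussian noise scaled by the step size, but the gradient is evaluated at a stale iterate, mimicking an asynchronous worker that lags a few steps behind. This is an illustrative toy, not the paper's implementation; the target (a standard Gaussian), the fixed delay, and all names (`grad_log_target`, `sgld_delayed`) are assumptions for the sketch.

```python
import numpy as np

def grad_log_target(theta):
    # Gradient of log N(0, I): simple log-concave toy target.
    return -theta

def sgld_delayed(theta0, step=0.05, n_iters=5000, delay=3, seed=0):
    """SGLD sketch where gradients are computed at iterates that are
    `delay` steps stale, as an asynchronous worker would produce."""
    rng = np.random.default_rng(seed)
    history = [theta0.copy()]
    samples = []
    for _ in range(n_iters):
        # Gradient at a delayed iterate (clamped at the start of the run).
        stale = history[max(0, len(history) - 1 - delay)]
        grad = grad_log_target(stale)
        noise = rng.normal(size=theta0.shape)
        # Langevin update: gradient step plus sqrt(2 * step) injected noise.
        theta = history[-1] + step * grad + np.sqrt(2 * step) * noise
        history.append(theta)
        samples.append(theta)
    return np.array(samples)

samples = sgld_delayed(np.zeros(2))
burned = samples[1000:]
# After burn-in the empirical moments should approach those of N(0, I).
print(burned.mean(axis=0), burned.std(axis=0))
```

For a small fixed delay and step size, the empirical mean and standard deviation stay close to those of the target, consistent with the paper's claim that delayed gradients do not significantly degrade convergence in measure.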

research
10/19/2020

Faster Convergence of Stochastic Gradient Langevin Dynamics for Non-Log-Concave Sampling

We establish a new convergence analysis of stochastic gradient Langevin ...
research
04/06/2020

Non-Convex Stochastic Optimization via Non-Reversible Stochastic Gradient Langevin Dynamics

Stochastic gradient Langevin dynamics (SGLD) is a powerful algorithm for ...
research
01/04/2018

Discrete symbolic optimization and Boltzmann sampling by continuous neural dynamics: Gradient Symbolic Computation

Gradient Symbolic Computation is proposed as a means of solving discrete...
research
08/04/2015

Asynchronous stochastic convex optimization

We show that asymptotically, completely asynchronous stochastic gradient...
research
05/23/2018

Adaptive Stochastic Gradient Langevin Dynamics: Taming Convergence and Saddle Point Escape Time

In this paper, we propose a new adaptive stochastic gradient Langevin dy...
research
04/28/2011

Distributed Delayed Stochastic Optimization

We analyze the convergence of gradient-based optimization algorithms tha...
research
02/06/2023

U-Clip: On-Average Unbiased Stochastic Gradient Clipping

U-Clip is a simple amendment to gradient clipping that can be applied to...
