Asynchronous parallel adaptive stochastic gradient methods

02/21/2020
by Yangyang Xu, et al.

Stochastic gradient methods (SGMs) are the predominant approach to training deep learning models. Their adaptive versions (e.g., Adam and AMSGrad) are extensively used in practice, partly because they achieve faster convergence than the non-adaptive versions while incurring little overhead. On the other hand, asynchronous (async) parallel computing has exhibited much better speed-up than its synchronous (sync) counterpart. However, async-parallel implementations have so far been demonstrated only for non-adaptive SGMs. The difficulty for adaptive SGMs originates from the second-moment term, which makes the convergence analysis challenging under async updates. In this paper, we propose an async-parallel adaptive SGM based on AMSGrad. We show that the proposed method inherits the convergence guarantees of AMSGrad for both convex and non-convex problems, provided the staleness (also called delay) caused by asynchrony is bounded. Our convergence-rate results indicate a nearly linear parallelization speed-up if τ = o(K^{1/4}), where τ is the staleness and K is the number of iterations. The proposed method is tested on both convex and non-convex machine learning problems, and the numerical results demonstrate its clear advantages over the sync counterpart.
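To make the setting concrete, below is a minimal, illustrative sketch (not the authors' implementation) of an async-parallel AMSGrad-style update on a toy least-squares problem: worker threads read possibly stale parameters, compute stochastic gradients at those stale iterates, and the adaptive moment updates are applied under a lock. The function name async_amsgrad, the shared-memory threading model, and all hyperparameter values are hypothetical choices made only for illustration.

    # Illustrative sketch of async-parallel AMSGrad-style updates (assumed setup).
    import threading
    import numpy as np

    def async_amsgrad(grad_fn, x0, num_workers=4, steps_per_worker=500,
                      lr=0.01, beta1=0.9, beta2=0.99, eps=1e-8):
        x = x0.copy()
        m = np.zeros_like(x)        # first-moment estimate
        v = np.zeros_like(x)        # second-moment estimate
        v_hat = np.zeros_like(x)    # running max of v (the AMSGrad correction)
        lock = threading.Lock()

        def worker(rng):
            for _ in range(steps_per_worker):
                x_stale = x.copy()          # read without the lock: may be stale
                g = grad_fn(x_stale, rng)   # stochastic gradient at the stale iterate
                with lock:                  # apply the adaptive update atomically
                    m[:] = beta1 * m + (1 - beta1) * g
                    v[:] = beta2 * v + (1 - beta2) * g * g
                    np.maximum(v_hat, v, out=v_hat)   # keeps v_hat non-decreasing
                    x[:] -= lr * m / (np.sqrt(v_hat) + eps)

        threads = [threading.Thread(target=worker,
                                    args=(np.random.default_rng(i),))
                   for i in range(num_workers)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return x

    if __name__ == "__main__":
        # Toy convex problem: minimize the average of ||a_i^T x - b_i||^2
        # over random mini-batches drawn by each worker.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((1000, 20))
        b = rng.standard_normal(1000)

        def grad_fn(x, rng, batch=32):
            idx = rng.integers(0, A.shape[0], size=batch)
            Ai, bi = A[idx], b[idx]
            return 2 * Ai.T @ (Ai @ x - bi) / batch

        x_final = async_amsgrad(grad_fn, np.zeros(20))
        print("final objective:", np.mean((A @ x_final - b) ** 2))

In this sketch the staleness corresponds to how many updates other workers apply between a worker's unsynchronized read of x and its locked write-back; the paper's analysis assumes such staleness is bounded by τ.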


