Distributed stochastic inertial methods with delayed derivatives

07/24/2021
by Yangyang Xu et al.

Stochastic gradient methods (SGMs) are the predominant approaches for solving stochastic optimization problems. On smooth nonconvex problems, several acceleration techniques have been applied to improve the convergence rate of SGMs. However, little work has explored applying such acceleration techniques to a stochastic subgradient method (SsGM) for nonsmooth nonconvex problems, and few efforts have been made to analyze an (accelerated) SsGM with delayed derivatives. Information delay arises naturally in a distributed system where computing workers do not coordinate with each other. In this paper, we propose an inertial proximal SsGM for solving nonsmooth nonconvex stochastic optimization problems. The proposed method has guaranteed convergence even with delayed derivative information in a distributed environment. Convergence rate results are established for three classes of nonconvex problems: weakly-convex nonsmooth problems with a convex regularizer, composite nonconvex problems with a nonsmooth convex regularizer, and smooth nonconvex problems. For each problem class, the convergence rate is O(1/K^{1/2}) in terms of the expected squared gradient norm, where K is the number of iterations. In a distributed environment, the convergence of the proposed method is slowed down by the information delay; nevertheless, the slow-down effect decays with the number of iterations for the latter two problem classes. We test the proposed method on three applications. The numerical results clearly demonstrate the advantage of the inertial-based acceleration. Furthermore, we observe higher parallelization speed-up with asynchronous updates than with the synchronous counterpart, even though the former uses delayed derivatives.
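To make the idea concrete, below is a minimal sketch of one possible form of such an update: a heavy-ball-style inertial step combined with the proximal mapping of a convex regularizer, where the stochastic subgradient may be evaluated at a stale iterate to mimic asynchronous delay. The callback `stoch_subgrad`, the soft-thresholding prox, and all parameter choices are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def prox_l1(v, lam):
    # Proximal operator of lam*||.||_1 (soft-thresholding), standing in
    # for the convex regularizer's prox in this illustration.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def inertial_prox_ssgm(stoch_subgrad, x0, alpha, beta, lam, K, delay=0):
    """Hypothetical inertial proximal stochastic subgradient sketch.

    stoch_subgrad(x, k): returns a stochastic subgradient of the
        (weakly-convex) loss part at x -- an assumed callback.
    delay: simulates stale derivatives by evaluating the subgradient at
        an iterate from `delay` steps ago, as in asynchronous updates.
    """
    x_prev = x0.copy()
    x = x0.copy()
    history = [x0.copy()]          # past iterates, to mimic information delay
    for k in range(K):
        stale = history[max(0, len(history) - 1 - delay)]
        g = stoch_subgrad(stale, k)            # possibly delayed derivative
        y = x + beta * (x - x_prev)            # inertial (heavy-ball) step
        x_prev, x = x, prox_l1(y - alpha * g, alpha * lam)
        history.append(x.copy())
    return x

# Toy usage: stochastic least squares with an l1 regularizer.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((200, 20)), rng.standard_normal(200)

def sg(x, k):
    i = rng.integers(200)                      # single-sample subgradient
    return A[i] * (A[i] @ x - b[i])

x_est = inertial_prox_ssgm(sg, np.zeros(20), alpha=1e-2, beta=0.5,
                           lam=0.1, K=2000, delay=4)
```

Setting `delay=0` recovers a synchronous update; larger values imitate the stale derivatives a worker would use in an uncoordinated distributed run.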


