Anytime Minibatch with Delayed Gradients

12/15/2020
by Haider Al-Lawati, et al.

Distributed optimization is widely deployed in practice to solve a broad range of problems. In a typical asynchronous scheme, workers compute gradients with respect to out-of-date optimization parameters while the master uses stale (i.e., delayed) gradients to update the parameters. While using stale gradients can slow convergence, asynchronous methods speed up the overall optimization in wall-clock time by allowing more frequent updates and reducing idling. In this paper, we present a variable per-epoch minibatch scheme called Anytime Minibatch with Delayed Gradients (AMB-DG). In AMB-DG, workers compute gradients during epochs of fixed duration, while the master uses stale gradients to update the optimization parameters. We analyze AMB-DG in terms of its regret bound and convergence rate, and prove that for convex smooth objective functions it achieves the optimal regret bound and convergence rate. We compare the performance of AMB-DG with that of Anytime Minibatch (AMB), which is similar to AMB-DG but does not use stale gradients: in AMB, workers idle after each gradient transmission until they receive the updated parameters from the master, whereas in AMB-DG workers never idle. We also extend AMB-DG to the fully distributed setting. When the communication delay is long, we observe that AMB-DG converges faster than AMB in wall-clock time. We also compare AMB-DG with the state-of-the-art fixed minibatch approach that uses delayed gradients. In experiments on a real distributed system, AMB-DG converges more than two times faster.
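To make the scheme concrete, below is a minimal single-process sketch (not the authors' implementation) of the AMB-DG update pattern: each worker accumulates as many stochastic gradients as it can within a fixed-duration compute epoch at the parameters it last received, and the master applies each resulting gradient only after the parameters have already advanced by `delay` further updates. The names `compute_epoch` and `stochastic_grad`, the toy least-squares objective, and the fixed `delay` are illustrative assumptions.

```python
import time
import numpy as np

rng = np.random.default_rng(0)
dim, lr, delay, num_epochs = 10, 0.1, 2, 50
compute_epoch = 0.01  # seconds of gradient computation per epoch (illustrative)

x_true = rng.normal(size=dim)          # toy least-squares target

def stochastic_grad(x):
    a = rng.normal(size=dim)           # random data sample
    return (a @ x - a @ x_true) * a    # gradient of 0.5 * (a.x - a.x_true)^2

x = np.zeros(dim)                      # master's current parameters
pending = []                           # in-flight (avg gradient, minibatch size) messages

for epoch in range(num_epochs):
    # Worker: accumulate gradients at the stale parameters it last received,
    # for a fixed wall-clock epoch, so the minibatch size varies per epoch.
    x_stale = x.copy()
    g_sum, b = np.zeros(dim), 0
    deadline = time.perf_counter() + compute_epoch
    while time.perf_counter() < deadline:
        g_sum += stochastic_grad(x_stale)
        b += 1
    pending.append((g_sum / b, b))

    # Master: a message arrives `delay` epochs after it was computed and is
    # applied to the *current* parameters, i.e., a delayed gradient (AMB-DG).
    if len(pending) > delay:
        g_avg, b = pending.pop(0)
        x -= lr * g_avg

print("final error:", np.linalg.norm(x - x_true))
```

The fixed-time epoch is what makes the minibatch size variable: a faster worker simply contributes more samples per epoch, and because updates are applied to whatever parameters the master currently holds, no worker ever waits for the updated parameters before starting its next epoch.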

