Straggler-Resilient Distributed Machine Learning with Dynamic Backup Workers

02/11/2021
by Guojun Xiong, et al.

With the increasing demand for large-scale training of machine learning models, consensus-based distributed optimization methods have recently been advocated as alternatives to the popular parameter server framework. In this paradigm, each worker maintains a local estimate of the optimal parameter vector and iteratively updates it by waiting for and averaging all estimates obtained from its neighbors, then correcting it on the basis of its local dataset. However, the synchronization phase can be time consuming due to the need to wait for stragglers, i.e., slower workers. An efficient way to mitigate this effect is to let each worker wait only for updates from its fastest neighbors before updating its local parameter; the remaining neighbors are called backup workers. To minimize the global training time over the network, we propose a fully distributed algorithm that dynamically determines the number of backup workers for each worker. We show that our algorithm achieves a linear speedup for convergence (i.e., convergence performance scales linearly with the number of workers). We conduct extensive experiments on MNIST and CIFAR-10 to verify our theoretical results.
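To make the update rule concrete, the following is a minimal sketch, not the authors' implementation: it assumes a toy quadratic objective per worker, a ring-like topology, and simulated communication delays, and the names (e.g., n_backup, step) are hypothetical. Each worker averages the estimates of its fastest neighbors, treats the slowest ones as backup workers, and then applies a local gradient correction.

```python
import numpy as np

# Minimal sketch (illustrative assumptions, not the paper's code) of one
# consensus-style round with backup workers.

rng = np.random.default_rng(0)

n_workers = 8      # number of workers in the network
dim = 5            # dimension of the parameter vector
lr = 0.1           # local learning rate
n_backup = 2       # hypothetical: number of slowest neighbors each worker skips

# Each worker keeps its own estimate of the optimal parameter vector.
params = [rng.normal(size=dim) for _ in range(n_workers)]

# Toy local objective for worker i: f_i(x) = ||x - target_i||^2 / 2.
targets = [rng.normal(size=dim) for _ in range(n_workers)]

def local_gradient(i, x):
    """Gradient of worker i's toy local objective."""
    return x - targets[i]

def step(i, neighbors):
    """One update for worker i: average the estimates of the fastest
    neighbors (the n_backup slowest act as backup workers), then take
    a local gradient step."""
    # Simulated message arrival times stand in for real communication delays.
    delays = rng.exponential(size=len(neighbors))
    order = np.argsort(delays)
    fastest = [neighbors[j] for j in order[: len(neighbors) - n_backup]]
    # Consensus averaging over the worker's own estimate and the fastest arrivals.
    avg = np.mean([params[i]] + [params[j] for j in fastest], axis=0)
    # Local gradient correction on the averaged estimate.
    return avg - lr * local_gradient(i, avg)

# One synchronous round on a ring-like topology where each worker has 4 neighbors.
neighbors_of = {
    i: [(i + d) % n_workers for d in (-2, -1, 1, 2)] for i in range(n_workers)
}
params = [step(i, neighbors_of[i]) for i in range(n_workers)]
```

In this sketch the number of backup workers is fixed; the paper's contribution is choosing it dynamically per worker to minimize the global training time.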


