Redundancy Techniques for Straggler Mitigation in Distributed Optimization and Learning

03/14/2018
by Can Karakus, et al.

Performance of distributed optimization and learning systems is bottlenecked by "straggler" nodes and slow communication links, which significantly delay computation. We propose a distributed optimization framework in which the dataset is "encoded" to have an over-complete representation with built-in redundancy, and the straggling nodes are dynamically left out of the computation at every iteration, their contribution being compensated by the embedded redundancy. We show that obliviously applying several popular optimization algorithms to the encoded data, including gradient descent, L-BFGS, and proximal gradient under data parallelism, and coordinate descent under model parallelism, converges to either an approximate or an exact solution of the original problem when stragglers are treated as erasures. These convergence guarantees are deterministic, i.e., they establish sample-path convergence for arbitrary sequences of delay patterns or distributions on the nodes, and are independent of the tail behavior of the delay distribution. We demonstrate that equiangular tight frames have desirable properties as encoding matrices, and propose efficient mechanisms for encoding large-scale data. We implement the proposed technique on Amazon EC2 clusters, demonstrate its performance on several learning problems, including matrix factorization, LASSO, ridge regression, and logistic regression, and compare the proposed method with uncoded, asynchronous, and data-replication strategies.
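
A minimal sketch of the encoded gradient-descent idea is given below (Python with NumPy). It is not the authors' implementation: it assumes a simple least-squares objective, substitutes an appropriately scaled Gaussian random matrix for the equiangular-tight-frame encoder, and simulates stragglers by dropping two randomly chosen workers at every iteration, with the update computed from the remaining encoded partitions only. All variable names and constants are illustrative.

import numpy as np

rng = np.random.default_rng(0)

# Original problem: min_x ||A x - b||^2, distributed row-wise across workers.
n, d = 200, 20
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)

# Encode with redundancy factor beta > 1: workers operate on (S A, S b).
# A Gaussian matrix is used here as a stand-in for an equiangular tight frame;
# with this scaling, S^T S is approximately the identity, so the encoded
# problem has approximately the same minimizer as the original one.
beta = 2.0
m = int(beta * n)
S = rng.standard_normal((m, n)) / np.sqrt(m)
A_enc, b_enc = S @ A, S @ b

# Partition the encoded rows across workers.
num_workers = 8
blocks = np.array_split(np.arange(m), num_workers)

# Safe step size 1/L, where L is the smoothness constant of the full encoded
# objective; it also upper-bounds that of any subset of the blocks.
step = 1.0 / np.linalg.norm(A_enc, 2) ** 2

x = np.zeros(d)
for it in range(500):
    # Simulate stragglers: only the workers that respond in time contribute,
    # and their identity may change from iteration to iteration.
    responding = rng.choice(num_workers, size=num_workers - 2, replace=False)
    grad = np.zeros(d)
    for w in responding:
        idx = blocks[w]
        Aw, bw = A_enc[idx], b_enc[idx]
        grad += Aw.T @ (Aw @ x - bw)   # partial gradient from worker w
    x -= step * grad

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))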

