Asynchronous Distributed Learning with Sparse Communications and Identification

12/10/2018
by Dmitry Grishchenko, et al.

In this paper, we present an asynchronous optimization algorithm for distributed learning that efficiently reduces communication between a master and worker machines by randomly sparsifying the local updates. This sparsification lifts the communication bottleneck often present in distributed learning setups, where workers perform computations on local data while a master machine coordinates their updates to optimize a global loss. We prove that, despite its sparse asynchronous communications, our algorithm admits a fixed stepsize and enjoys a linear convergence rate in the strongly convex case. Moreover, for ℓ_1-regularized problems, the algorithm identifies near-optimal sparsity patterns, so that all communications eventually become sparse. We furthermore leverage this identification to improve our sparsification technique. We illustrate on real and synthetic data that the algorithm converges faster in terms of data exchanged.
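The core mechanism described in the abstract can be illustrated with a short sketch: each worker computes a local gradient step, keeps only a random subset of coordinates (rescaled so the sparsified update stays unbiased), and the master applies a proximal ℓ_1 step with a fixed stepsize, which yields the sparsity pattern mentioned above. The Python sketch below is a simplified, sequential stand-in for the asynchronous master/worker protocol; the names (sparsify_update, soft_threshold, keep_prob) and the toy least-squares data are illustrative assumptions, not the authors' exact algorithm, which additionally handles delayed worker updates and adapts the sparsification to the identified support.

    import numpy as np

    def sparsify_update(update, keep_prob, rng):
        # Keep each coordinate with probability keep_prob and rescale
        # survivors so the sparsified update is unbiased in expectation.
        mask = rng.random(update.shape) < keep_prob
        sparse = np.zeros_like(update)
        sparse[mask] = update[mask] / keep_prob
        return sparse

    def soft_threshold(x, tau):
        # Proximal operator of tau * ||x||_1; zeroes small coordinates,
        # producing the (near-optimal) sparsity pattern at the master.
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    # Toy master/worker loop (sequential stand-in for the asynchronous setup):
    # each "worker" holds local data (A_i, b_i) for a least-squares loss plus
    # an l1 penalty; it computes a local gradient step, sparsifies it, and the
    # master aggregates and applies the proximal step with a fixed stepsize.
    rng = np.random.default_rng(0)
    n_workers, dim, lam, step, keep_prob = 4, 50, 0.1, 0.01, 0.2
    data = [(rng.standard_normal((20, dim)), rng.standard_normal(20))
            for _ in range(n_workers)]
    x = np.zeros(dim)
    for it in range(500):
        i = rng.integers(n_workers)                   # worker whose update arrives
        A, b = data[i]
        grad = A.T @ (A @ x - b) / len(b)             # local gradient at current point
        delta = sparsify_update(-step * grad, keep_prob, rng)  # sparse communication
        x = soft_threshold(x + delta, step * lam)     # master's proximal l1 update
    print("nonzero coordinates identified:", np.count_nonzero(x))

In this sketch only a keep_prob fraction of coordinates is transmitted per update, which is the source of the communication savings; once the proximal step has identified the support, most transmitted coordinates fall inside it, so exchanges become sparse on both sides.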


