On Maintaining Linear Convergence of Distributed Learning and Optimization under Limited Communication

02/26/2019
by Sindri Magnússon, et al.

In parallel and distributed machine learning, multiple nodes or processors coordinate to solve large problems. To do this, nodes need to compress important algorithmic information into bits so it can be communicated. The goal of this paper is to explore how we can maintain the convergence of distributed algorithms under such compression. In particular, we consider a general class of linearly convergent parallel/distributed algorithms and illustrate how we can design quantizers that compress the communicated information to a few bits while still preserving the linear convergence. We illustrate our results on learning algorithms using different communication structures, such as decentralized algorithms where a single master coordinates information from many workers and fully distributed algorithms where only neighbors in a communication graph can communicate. We also numerically implement our results in distributed learning on smartphones using real-world data.
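
As a rough illustration of the kind of scheme the paper studies, the sketch below runs master-worker gradient descent in which each worker sends a uniformly quantized gradient, and the quantizer's range shrinks geometrically as the iterates converge. The synthetic least-squares problem, the constants, and all names are illustrative assumptions, not the paper's actual quantizer design.

```python
import numpy as np

# Minimal sketch: master-worker gradient descent where workers send
# quantized gradients. Each coordinate is mapped to one of 2^bits levels
# inside a range that shrinks geometrically, mirroring the idea of
# preserving linear convergence under few-bit communication.
# The problem data, step size, and shrink factor are illustrative only.

rng = np.random.default_rng(0)
n_workers, dim = 4, 10
A = [rng.standard_normal((20, dim)) for _ in range(n_workers)]
b = [rng.standard_normal(20) for _ in range(n_workers)]

def local_grad(i, x):
    # Gradient of (1/2)||A_i x - b_i||^2 held by worker i
    return A[i].T @ (A[i] @ x - b[i])

def quantize(v, radius, bits=4):
    # Uniform quantizer on [-radius, radius] with 2^bits - 1 steps
    levels = 2 ** bits - 1
    clipped = np.clip(v, -radius, radius)
    step = 2 * radius / levels
    return np.round((clipped + radius) / step) * step - radius

x = np.zeros(dim)
step_size = 1e-3
radius = 10.0        # initial quantization range
shrink = 0.995       # geometric shrinkage of the range per iteration

for _ in range(2000):
    # Each worker quantizes its gradient before "communicating" it
    g = sum(quantize(local_grad(i, x), radius) for i in range(n_workers))
    x -= step_size * g
    radius *= shrink  # tighten the quantizer as the iterates converge

print("final gradient norm:",
      np.linalg.norm(sum(local_grad(i, x) for i in range(n_workers))))
```

The key design choice this sketch tries to convey is that the quantization range must contract at (roughly) the same linear rate as the algorithm itself, so the quantization error never dominates the remaining optimization error.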


Related research

Communication-efficient Variance-reduced Stochastic Gradient Descent (03/10/2020)
We consider the problem of communication efficient distributed optimizat...

Fundamental Limits of Distributed Data Shuffling (06/29/2018)
Data shuffling of training data among different computing nodes (workers...

DAMS: Distributed Adaptive Metaheuristic Selection (07/18/2012)
We present a distributed generic algorithm called DAMS dedicated to adap...

Efficient Variance-Reduced Learning for Fully Decentralized On-Device Intelligence (08/04/2017)
This work develops a fully decentralized variance-reduced learning algor...

MixML: A Unified Analysis of Weakly Consistent Parallel Learning (05/14/2020)
Parallelism is a ubiquitous method for accelerating machine learning alg...

Efficient Protocols for Distributed Classification and Optimization (04/16/2012)
In distributed learning, the goal is to perform a learning task over dat...

SLSGD: Secure and Efficient Distributed On-device Machine Learning (03/16/2019)
We consider distributed on-device learning with limited communication an...
