On Maintaining Linear Convergence of Distributed Learning and Optimization under Limited Communication

02/26/2019
by Sindri Magnússon, et al.

In parallel and distributed machine learning, multiple nodes or processors coordinate to solve large problems. To do so, the nodes must compress the algorithm information they exchange into bits so that it can be communicated. The goal of this paper is to explore how the convergence of distributed algorithms can be maintained under such compression. In particular, we consider a general class of linearly convergent parallel/distributed algorithms and illustrate how to design quantizers that compress the communicated information to a few bits while still preserving linear convergence. We illustrate our results on learning algorithms with different communication structures, such as decentralized algorithms where a single master coordinates information from many workers, and fully distributed algorithms where only neighbors in a communication graph can communicate. We also implement our results numerically for distributed learning on smartphones using real-world data.
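The abstract only sketches the idea of quantizing communicated information while preserving linear convergence, and the paper's actual quantizer design is not reproduced here. As a rough, hedged illustration of the general pattern, the Python sketch below shows a toy master/worker gradient descent in which each worker sends a uniformly quantized gradient using a fixed number of bits per entry, and the quantization range shrinks geometrically so that resolution improves at roughly the rate the iterates converge. All names, step sizes, and the shrink factor are illustrative assumptions, not the authors' scheme.

```python
import numpy as np

def uniform_quantize(v, radius, bits):
    """Quantize each entry of v onto a uniform grid of 2**bits levels
    spanning [-radius, radius] (a generic quantizer, not the paper's)."""
    levels = 2 ** bits - 1
    clipped = np.clip(v, -radius, radius)
    # map to [0, levels], round to the nearest grid point, map back
    idx = np.round((clipped + radius) / (2 * radius) * levels)
    return idx / levels * (2 * radius) - radius

def quantized_master_worker_gd(grads, x0, step, bits, radius0, shrink, iters):
    """Toy master/worker gradient descent with a quantized uplink:
    workers send quantized gradients, the master averages them and
    (for simplicity) broadcasts the exact new iterate."""
    x, radius = x0.copy(), radius0
    for _ in range(iters):
        # workers: compute local gradients and quantize to `bits` bits per entry
        q = [uniform_quantize(g(x), radius, bits) for g in grads]
        # master: average the quantized gradients and take a gradient step
        x = x - step * np.mean(q, axis=0)
        # shrink the quantizer range geometrically so its resolution keeps
        # pace with the (linearly) converging iterates
        radius *= shrink
    return x

# hypothetical usage: a least-squares problem split across two workers
rng = np.random.default_rng(0)
A = [rng.standard_normal((20, 5)) for _ in range(2)]
b = [rng.standard_normal(20) for _ in range(2)]
grads = [lambda x, Ai=Ai, bi=bi: Ai.T @ (Ai @ x - bi) for Ai, bi in zip(A, b)]
x_hat = quantized_master_worker_gd(grads, np.zeros(5), step=0.01,
                                   bits=4, radius0=10.0, shrink=0.95, iters=500)
```

The key design choice in this sketch, chosen to mirror the abstract's claim, is that the quantizer's dynamic range contracts at a rate matched to the algorithm's linear convergence: with a fixed bit budget per message, the effective resolution then improves geometrically, which is what allows the quantized iterates to keep converging linearly rather than stalling at the quantization floor.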
