On Maintaining Linear Convergence of Distributed Learning and Optimization under Limited Communication

by Sindri Magnússon et al.

In parallel and distributed machine learning, multiple nodes or processors coordinate to solve large problems. To do this, the nodes must compress important algorithmic information into bits so that it can be communicated. The goal of this paper is to explore how to maintain the convergence of distributed algorithms under such compression. In particular, we consider a general class of linearly convergent parallel/distributed algorithms and show how to design quantizers that compress the communicated information to a few bits while still preserving linear convergence. We illustrate our results on learning algorithms with different communication structures, such as decentralized algorithms, where a single master coordinates information from many workers, and fully distributed algorithms, where only neighbors in a communication graph can communicate. We also validate our results numerically with distributed learning on smartphones using real-world data.
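The core idea can be illustrated with a minimal sketch (not the paper's actual algorithm): a worker sends its gradient through a uniform quantizer of fixed bit-width whose dynamic range shrinks geometrically, tracking the linear convergence of the iterates so the quantization error decays at the same rate. All names, step sizes, and the contraction factor below are illustrative assumptions.

```python
import numpy as np

def quantize(v, r, bits):
    """Uniformly quantize each entry of v to 2**bits levels on [-r, r].
    The integer codes are what would actually be communicated."""
    levels = 2 ** bits - 1
    clipped = np.clip(v, -r, r)
    codes = np.round((clipped + r) / (2 * r) * levels)
    return codes / levels * 2 * r - r

def quantized_gradient_descent(grad, x0, step, bits=4,
                               r0=1.0, contraction=0.9, iters=200):
    """Master-worker gradient descent where the worker transmits a
    quantized gradient each round. Both sides shrink the quantizer
    range geometrically, matching the linear convergence rate."""
    x, r = x0.copy(), r0
    for _ in range(iters):
        g = quantize(grad(x), r, bits)  # worker: compress to `bits` bits/entry
        x = x - step * g                # master: apply the quantized update
        r *= contraction                # shared schedule: shrink the range
    return x

# Toy strongly convex problem: f(x) = 0.5 * ||x - x_star||^2
x_star = np.array([0.3, -0.2])
grad = lambda x: x - x_star
x = quantized_gradient_descent(grad, np.zeros(2), step=0.5)
```

Because the range `r` contracts no faster than the iterate error, the gradient never falls outside the quantizer's range, and the per-round quantization error decays linearly alongside the optimization error.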




