Achieving the fundamental convergence-communication tradeoff with Differentially Quantized Gradient Descent

02/06/2020
by Chung-Yi Lin, et al.

We consider the problem of reducing the communication cost of distributed training through gradient quantization. For the class of smooth and strongly convex objective functions, we characterize the minimum linear convergence rate achievable with a given number of bits per problem dimension n. We propose Differentially Quantized Gradient Descent (DQ-GD), a quantization algorithm with error compensation, and prove that it achieves the fundamental tradeoff between communication rate and convergence rate as the dimension n goes to infinity. In contrast, a naive quantizer that directly compresses the current gradient fails to achieve that optimal tradeoff. Experimental results on both simulated and real-world least-squares problems confirm our theoretical analysis.
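To make the idea of quantization with error compensation concrete, here is a minimal sketch contrasting an error-compensated update with the naive approach of quantizing the current gradient directly. The helper names (`uniform_quantize`, `dq_gd`, `naive_qgd`) and the choice of a per-coordinate uniform quantizer with parameters `bits` and `radius` are illustrative assumptions, not the paper's exact construction; in particular, DQ-GD's precise weighting of the carried-over error may differ from this simple error-feedback scheme.

```python
import numpy as np

def uniform_quantize(v, bits, radius):
    """Per-coordinate uniform quantizer on [-radius, radius] with 2**bits levels.
    A stand-in scalar quantizer for illustration only."""
    step = 2.0 * radius / (2 ** bits - 1)
    return np.clip(np.round(v / step) * step, -radius, radius)

def dq_gd(grad, x0, eta, bits, radius, iters=100):
    """Gradient descent with error-compensated (differential) quantization.

    The sender quantizes the current gradient plus the carried-over quantization
    error; the receiver updates using only the quantized vector.
    """
    x = np.asarray(x0, dtype=float)
    err = np.zeros_like(x)            # accumulated quantization error
    for _ in range(iters):
        u = grad(x) + err             # compensate the previous quantization error
        q = uniform_quantize(u, bits, radius)
        err = u - q                   # carry the new quantization error forward
        x = x - eta * q               # receiver-side step uses the quantized vector
    return x

def naive_qgd(grad, x0, eta, bits, radius, iters=100):
    """Baseline: quantize the current gradient directly, with no error compensation."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        q = uniform_quantize(grad(x), bits, radius)
        x = x - eta * q
    return x
```

For example, on a least-squares objective one would pass `grad = lambda x: A.T @ (A @ x - b)` and compare the iterates of `dq_gd` and `naive_qgd` at the same bit budget; the error-compensated variant is the one the paper shows can match the optimal convergence-communication tradeoff.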

