Communication-Efficient Distributed Learning via Lazily Aggregated Quantized Gradients

09/17/2019
by   Jun Sun, et al.

The present paper develops a novel aggregated gradient approach for distributed machine learning that adaptively compresses the gradient communication. The key idea is to first quantize the computed gradients, and then skip less informative quantized gradient communications by reusing outdated gradients. Quantizing and skipping result in "lazy" worker-server communications, which justifies the term Lazily Aggregated Quantized gradient, henceforth abbreviated as LAQ. LAQ provably attains the same linear convergence rate as gradient descent in the strongly convex case, while achieving major savings in communication overhead, both in transmitted bits and in communication rounds. Empirically, experiments with real data corroborate a significant communication reduction compared to existing gradient- and stochastic gradient-based algorithms.
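The two mechanisms described above, quantization and communication skipping, can be sketched on the worker side as follows. This is a simplified illustration, not the paper's exact algorithm: the function names, the uniform quantizer, and the fixed skip threshold are all assumptions made for the example.

```python
import numpy as np

def quantize(grad, ref, bits=4):
    """Uniformly quantize the innovation (grad - ref) with `bits` bits
    per coordinate, and return the dequantized gradient estimate.
    Quantizing relative to the last transmitted gradient `ref` keeps
    the range of values to encode small."""
    R = np.max(np.abs(grad - ref)) + 1e-12   # quantization range
    levels = 2 ** bits - 1                   # number of quantization steps
    step = 2.0 * R / levels
    q = np.round((grad - ref + R) / step)    # integer codes in [0, levels]
    return ref - R + q * step                # dequantized value

def laq_step(grad, last_sent, threshold, bits=4):
    """One 'lazy' communication decision for a worker.
    Returns (gradient the server should use, whether to transmit).
    If the quantized gradient barely differs from the last one sent,
    the upload is skipped and the server reuses the outdated gradient."""
    q_grad = quantize(grad, last_sent, bits)
    if np.linalg.norm(q_grad - last_sent) ** 2 <= threshold:
        return last_sent, False   # skip: reuse outdated gradient
    return q_grad, True           # transmit the new quantized gradient
```

In LAQ itself the skip condition is adaptive, comparing the innovation against a weighted sum of recent iterate differences plus quantization error rather than a fixed `threshold`, but the control flow is the same: small innovations are never sent, so both the bits per round and the number of rounds shrink.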


Related research

05/25/2018
LAG: Lazily Aggregated Gradient for Communication-Efficient Distributed Learning
This paper presents a new class of gradient methods for distributed mach...

02/26/2020
LASG: Lazily Aggregated Stochastic Gradients for Communication-Efficient Distributed Learning
This paper targets solving distributed machine learning problems such as...

02/05/2022
Distributed Learning With Sparsified Gradient Differences
A very large number of communications are typically required to solve di...

10/09/2019
High-Dimensional Stochastic Gradient Quantization for Communication-Efficient Edge Learning
Edge machine learning involves the deployment of learning algorithms at ...

11/18/2019
vqSGD: Vector Quantized Stochastic Gradient Descent
In this work, we present a family of vector quantization schemes vqSGD (...

09/24/2022
Communication-Efficient Federated Learning Using Censored Heavy Ball Descent
Distributed machine learning enables scalability and computational offlo...

07/13/2023
Online Distributed Learning with Quantized Finite-Time Coordination
In this paper we consider online distributed learning problems. Online d...
