Hyper-Sphere Quantization: Communication-Efficient SGD for Federated Learning

11/12/2019
by Xinyan Dai, et al.

The high cost of communicating gradients is a major bottleneck for federated learning, as the bandwidth of the participating user devices is limited. Existing gradient compression algorithms are mainly designed for data centers with high-speed networks and achieve O(√(d)log d) per-iteration communication cost at best, where d is the size of the model. We propose hyper-sphere quantization (HSQ), a general framework that can be configured to achieve a continuum of trade-offs between communication efficiency and gradient accuracy. In particular, at the high-compression end, HSQ provides a low per-iteration communication cost of O(log d), which is favorable for federated learning. We prove the convergence of HSQ theoretically and show by experiments that HSQ significantly reduces the communication cost of model training without hurting convergence accuracy.
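The core idea the abstract describes can be illustrated with a minimal sketch: instead of sending a d-dimensional gradient, a client sends only its scalar norm plus the index of the nearest codeword in a shared codebook of unit vectors on the hyper-sphere, so the payload grows with log of the codebook size rather than with d. The function names (`hsq_encode`, `hsq_decode`) and the random-codebook construction below are illustrative assumptions, not the paper's actual algorithm, which includes further design choices (e.g. unbiasedness) needed for the convergence guarantees.

```python
import numpy as np

def hsq_encode(grad, codebook):
    """Encode a gradient as (norm, codeword index).

    The gradient's direction is snapped to the codeword with the
    highest cosine similarity; only the scalar norm and the integer
    index need to be transmitted (~log2(K) bits for K codewords).
    """
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return 0.0, 0
    direction = grad / norm
    idx = int(np.argmax(codebook @ direction))  # nearest on the sphere
    return float(norm), idx

def hsq_decode(norm, idx, codebook):
    """Reconstruct the quantized gradient from the compact message."""
    return norm * codebook[idx]

# Toy shared codebook: K random unit vectors in d dimensions
# (a real scheme would use a structured codebook).
rng = np.random.default_rng(0)
d, K = 16, 64
codebook = rng.standard_normal((K, d))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)
```

Because every codeword lies on the unit sphere, the decoded vector always has the same norm as the original gradient; only the direction is approximated, and a larger codebook trades bits for a smaller direction error.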


