Lossy Gradient Compression: How Much Accuracy Can One Bit Buy?

02/06/2022
by   Sadaf Salehkalaibar, et al.

In federated learning (FL), a global model is trained at a Parameter Server (PS) by aggregating model updates obtained from multiple remote learners. Critically, the communication from the remote users to the PS is limited by the available transmission power, while the transmission from the PS to the remote users can be considered unbounded. This gives rise to a distributed learning scenario in which the updates from the remote learners have to be compressed so as to meet communication rate constraints on the uplink transmission toward the PS. For this problem, one would like to compress the model updates so as to minimize the resulting loss in accuracy. In this paper, we take a rate-distortion approach to answer this question for the distributed training of a deep neural network (DNN). In particular, we define a measure of compression performance, the per-bit accuracy, which quantifies the model accuracy that each bit of communication brings to the centralized model. In order to maximize the per-bit accuracy, we model the gradient updates at the remote learners as following a generalized normal distribution. Under this assumption on the model update distribution, we propose a class of distortion measures for the design of quantizers for the compression of the model updates. We argue that this family of distortion measures, which we refer to as the "M-magnitude weighted L_2" norm, captures practitioners' intuition in the choice of gradient compressors. Numerical simulations are provided to validate the proposed approach.
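As a rough illustration of the modeling step described above, the sketch below fits a generalized normal (GenNorm) distribution to a vector of gradient-like values using SciPy and then evaluates a magnitude-weighted squared-error distortion between that vector and a crudely quantized copy. The function name m_magnitude_weighted_l2, the magnitude exponent M, and the uniform quantizer are illustrative assumptions for this sketch, not the paper's exact construction.

    import numpy as np
    from scipy.stats import gennorm

    def m_magnitude_weighted_l2(g, g_hat, M=1.0):
        """Hypothetical sketch of an 'M-magnitude weighted L_2' distortion:
        per-coordinate squared error weighted by |g_i|**M, so that
        large-magnitude gradient entries are penalized more heavily.
        The exact weighting in the paper may differ."""
        weights = np.abs(g) ** M
        return np.sum(weights * (g - g_hat) ** 2)

    # Stand-in for one layer's gradient vector (heavy-tailed, small scale).
    rng = np.random.default_rng(0)
    g = rng.standard_t(df=5, size=10_000) * 1e-3

    # Fit a generalized normal (GenNorm) model to the gradient entries.
    # A fitted shape parameter beta near 1 indicates Laplace-like tails,
    # beta near 2 indicates Gaussian-like behavior.
    beta, loc, scale = gennorm.fit(g)
    print(f"fitted GenNorm: shape={beta:.2f}, loc={loc:.2e}, scale={scale:.2e}")

    # Crude uniform quantizer, purely for illustration of the distortion measure.
    step = 2 * np.std(g)
    g_hat = np.round(g / step) * step

    print("M-magnitude weighted L_2 distortion:",
          m_magnitude_weighted_l2(g, g_hat, M=1.0))

The weighting by |g_i|**M reflects the common practitioner heuristic that large gradient entries should be reproduced more faithfully than near-zero ones; setting M = 0 recovers the plain squared-error distortion.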

Related research

- M22: A Communication-Efficient Algorithm for Federated Learning Inspired by Rate-Distortion (01/23/2023)
- DNN gradient lossless compression: Can GenNorm be the answer? (11/15/2021)
- An End-to-End Encrypted Neural Network for Gradient Updates Transmission in Federated Learning (08/22/2019)
- How to Attain Communication-Efficient DNN Training? Convert, Compress, Correct (04/18/2022)
- Slashing Communication Traffic in Federated Learning by Transmitting Clustered Model Updates (05/10/2021)
- UVeQFed: Universal Vector Quantization for Federated Learning (06/05/2020)
