Intrinsic Gradient Compression for Federated Learning

12/05/2021
by Luke Melas-Kyriazi, et al.

Federated learning is a rapidly growing area of research which enables a large number of clients to jointly train a machine learning model on privately-held data. One of the largest barriers to wider adoption of federated learning is the communication cost of sending model updates to and from the clients, which is accentuated by the fact that many of these devices are bandwidth-constrained. In this paper, we aim to address this issue by optimizing networks within a subspace of their full parameter space, an idea known as intrinsic dimension in the machine learning theory community. We use a correspondence between the notion of intrinsic dimension and gradient compressibility to derive a family of low-bandwidth optimization algorithms, which we call intrinsic gradient compression algorithms. Specifically, we present three algorithms in this family with different levels of upload and download bandwidth for use in various federated settings, along with theoretical guarantees on their performance. Finally, in large-scale federated learning experiments with models containing up to 100M parameters, we show that our algorithms perform extremely well compared to current state-of-the-art gradient compression methods.
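To make the core idea concrete, below is a minimal sketch of subspace training in the intrinsic-dimension style the abstract describes: the full parameters are reparameterized as theta = theta0 + P @ d, where P is a fixed random projection regenerated on both client and server from a shared seed, so only the low-dimensional vector d ever needs to be communicated. The projection scheme, toy loss, and all names here are illustrative assumptions, not the paper's actual algorithms.

```python
import numpy as np

def make_projection(full_dim, intrinsic_dim, seed=0):
    # Client and server regenerate the same random projection from a
    # shared seed, so the (full_dim x intrinsic_dim) matrix itself is
    # never transmitted. Columns are normalized for stable step sizes.
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((full_dim, intrinsic_dim))
    return P / np.linalg.norm(P, axis=0)

# Toy setup: a "model" with 10,000 parameters trained in a
# 100-dimensional subspace.
full_dim, intrinsic_dim = 10_000, 100
P = make_projection(full_dim, intrinsic_dim, seed=42)
theta0 = np.zeros(full_dim)       # frozen initialization
d = np.zeros(intrinsic_dim)       # low-dimensional coordinates

def loss_grad(theta):
    # Stand-in for a real model's gradient: quadratic loss around a target.
    target = np.ones(full_dim)
    return theta - target

for step in range(200):
    theta = theta0 + P @ d        # map subspace coords to full weights
    g_full = loss_grad(theta)     # full-dimensional local gradient
    g_low = P.T @ g_full          # chain rule: gradient w.r.t. d
    d -= 0.1 * g_low              # update entirely within the subspace

# A client only uploads the intrinsic_dim-sized vector (or its update),
# a ~100x bandwidth reduction versus sending the full 10,000-dim gradient.
update_to_send = d
print(update_to_send.shape)       # (100,)
```

In this sketch, the compression ratio is simply full_dim / intrinsic_dim, which is the sense in which intrinsic dimension and gradient compressibility correspond.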

