DRIVE: One-bit Distributed Mean Estimation

05/18/2021
by Shay Vargaftik, et al.

We consider the problem where n clients transmit d-dimensional real-valued vectors using d(1+o(1)) bits each, in a manner that allows the receiver to approximately reconstruct their mean. Such compression problems naturally arise in distributed and federated learning. We provide novel mathematical results and derive computationally efficient algorithms that are more accurate than previous compression techniques. We evaluate our methods on a collection of distributed and federated learning tasks, using a variety of datasets, and show a consistent improvement over the state of the art.
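The abstract does not spell out the compression scheme itself. As a rough, self-contained illustration of how roughly one bit per coordinate can suffice for distributed mean estimation, the sketch below applies a shared random rotation (here a sign-randomized fast Walsh-Hadamard transform), keeps only the signs of the rotated coordinates plus one scalar scale, and inverts the rotation at the receiver. The helper names (hadamard_rotate, encode, decode), the shared-seed convention, and the particular scale choice are illustrative assumptions, not necessarily the paper's exact algorithm.

import numpy as np

def hadamard_rotate(x, signs):
    # Orthonormal "rotation": flip signs coordinate-wise, then apply a fast
    # Walsh-Hadamard transform. Requires len(x) to be a power of two.
    y = x * signs
    d, h = len(y), 1
    while h < d:
        for i in range(0, d, 2 * h):
            a = y[i:i + h].copy()
            b = y[i + h:i + 2 * h].copy()
            y[i:i + h] = a + b
            y[i + h:i + 2 * h] = a - b
        h *= 2
    return y / np.sqrt(d)  # H / sqrt(d) is orthonormal and self-inverse

def encode(x, seed):
    # Client side: rotate, then transmit d sign bits plus one float scale.
    rng = np.random.default_rng(seed)
    signs = rng.choice([-1.0, 1.0], size=len(x))
    z = hadamard_rotate(x, signs)
    l1 = np.abs(z).sum()
    # Illustrative scale choice: ||z||_2^2 / ||z||_1, which keeps the
    # estimate unbiased under an ideal uniformly random rotation (assumption).
    scale = 0.0 if l1 == 0.0 else float(z @ z) / l1
    return np.signbit(z), scale

def decode(bits, scale, seed, d):
    # Server side: rebuild the quantized rotated vector, invert the rotation.
    rng = np.random.default_rng(seed)  # same seed => same rotation
    signs = rng.choice([-1.0, 1.0], size=d)
    z_hat = scale * np.where(bits, -1.0, 1.0)
    return hadamard_rotate(z_hat, np.ones(d)) * signs

if __name__ == "__main__":
    d, n = 1024, 16  # d must be a power of two for this transform
    rng = np.random.default_rng(0)
    clients = [rng.standard_normal(d) for _ in range(n)]
    estimates = [decode(*encode(x, seed=i), seed=i, d=d)
                 for i, x in enumerate(clients)]
    err = np.linalg.norm(np.mean(estimates, axis=0) - np.mean(clients, axis=0))
    print(f"L2 error of the estimated mean: {err:.4f}")

Averaging many such one-bit estimates at the server drives down the per-client quantization error; the shared seed stands in for the common randomness that sender and receiver must agree on.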


08/19/2021

Communication-Efficient Federated Learning via Robust Distributed Mean Estimation

Federated learning commonly relies on algorithms such as distributed (mi...
05/22/2022

Federated Learning Aggregation: New Robust Algorithms with Guarantees

Federated Learning has been recently proposed for distributed model trai...
10/07/2020

Optimal Gradient Compression for Distributed and Federated Learning

Communicating information, like gradient vectors, between computing node...
09/18/2019

Detailed comparison of communication efficiency of split learning and federated learning

We compare communication efficiencies of two compelling distributed mach...
11/24/2020

Wyner-Ziv Estimators: Efficient Distributed Mean Estimation with Side Information

Communication efficient distributed mean estimation is an important prim...
12/05/2021

Intrinsic Gradient Compression for Federated Learning

Federated learning is a rapidly-growing area of research which enables a...
10/14/2021

Leveraging Spatial and Temporal Correlations in Sparsified Mean Estimation

We study the problem of estimating at a central server the mean of a set...