DRIVE: One-bit Distributed Mean Estimation

by Shay Vargaftik et al.

We consider the problem where n clients transmit d-dimensional real-valued vectors using d(1+o(1)) bits each, in a manner that allows the receiver to approximately reconstruct their mean. Such compression problems naturally arise in distributed and federated learning. We provide novel mathematical results and derive computationally efficient algorithms that are more accurate than previous compression techniques. We evaluate our methods on a collection of distributed and federated learning tasks, using a variety of datasets, and show a consistent improvement over the state of the art.
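As a concrete illustration of the kind of one-bit scheme the abstract describes (a minimal sketch, not the paper's exact estimator): each client can rotate its vector by a random orthogonal matrix, transmit only the d sign bits of the rotated vector plus a single scale, and the receiver inverts the rotation. The scale below is the least-squares choice that minimizes the per-client reconstruction error; the explicit QR-based rotation is an assumption for clarity (structured rotations such as randomized Hadamard transforms are the practical choice).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 128  # vector dimension (illustrative)

def random_rotation(d, rng):
    # Random orthogonal matrix via QR of a Gaussian matrix.
    q, r = np.linalg.qr(rng.standard_normal((d, d)))
    # Fix column signs so the distribution is closer to uniform (Haar).
    return q * np.sign(np.diag(r))

def encode(x, R):
    # Rotate, then keep only the signs: d bits total.
    z = R @ x
    s = np.sign(z)
    s[s == 0] = 1.0
    # Least-squares scale: argmin_S ||z - S*s||^2 = <z, s>/d = ||z||_1 / d.
    scale = np.abs(z).sum() / len(z)
    return s, scale  # d sign bits + one float => d(1 + o(1)) bits

def decode(s, scale, R):
    # Undo the rotation on the scaled sign vector.
    return R.T @ (scale * s)

R = random_rotation(d, rng)
x = rng.standard_normal(d)
xhat = decode(*encode(x, R), R)
rel_err = np.linalg.norm(x - xhat) / np.linalg.norm(x)
```

For mean estimation with n clients, each client encodes with its own rotation and the server averages the decoded vectors; the per-client errors are independent, so the error of the mean shrinks as n grows.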


