Federated Learning with a Sampling Algorithm under Isoperimetry

by   Lukang Sun, et al.

Federated learning uses a set of techniques to efficiently distribute the training of a machine learning model across several devices, which own the training data. These techniques critically rely on reducing the communication cost – the main bottleneck – between the devices and a central server. Federated learning algorithms usually take an optimization approach: they are algorithms for minimizing the training loss subject to communication (and other) constraints. In this work, we instead take a Bayesian approach for the training task, and propose a communication-efficient variant of the Langevin algorithm to sample from the posterior. The latter approach is more robust and provides more knowledge of the posterior distribution than its optimization counterpart. We analyze our algorithm without assuming that the target distribution is strongly log-concave. Instead, we assume the weaker log-Sobolev inequality, which allows for nonconvexity.
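To make the sampling approach concrete, the following is a minimal sketch of the plain unadjusted Langevin algorithm (ULA), the building block the paper adapts. This is not the paper's communication-efficient federated variant; the function name and parameters are illustrative. In a federated setting, `grad_log_pi` would typically be assembled from (compressed) gradient contributions of the clients.

```python
import numpy as np

def unadjusted_langevin(grad_log_pi, x0, step=1e-2, n_steps=1000, rng=None):
    """Unadjusted Langevin algorithm (illustrative sketch).

    Iterates x_{k+1} = x_k + step * grad log pi(x_k) + sqrt(2 * step) * xi_k,
    where xi_k is standard Gaussian noise, to produce approximate samples
    from a target density pi known up to normalization via its score
    grad log pi.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float).copy()
    samples = []
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + step * grad_log_pi(x) + np.sqrt(2.0 * step) * noise
        samples.append(x.copy())
    return np.array(samples)

# Example target: a standard 2-D Gaussian, for which grad log pi(x) = -x.
samples = unadjusted_langevin(lambda x: -x, x0=np.zeros(2),
                              step=0.05, n_steps=5000, rng=0)
```

For a Gaussian target the chain's empirical mean converges toward zero; for nonconvex potentials (log-concavity replaced by a log-Sobolev inequality, as in the paper's analysis), ULA still mixes, but the step size and iteration count must be chosen more carefully.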



