An Expectation-Maximization Perspective on Federated Learning

11/19/2021
by Christos Louizos, et al.

Federated learning describes the distributed training of models across multiple clients while keeping the data private on-device. In this work, we view the server-orchestrated federated learning process as a hierarchical latent variable model in which the server provides the parameters of a prior distribution over the client-specific model parameters. We show that, with simple Gaussian priors and a hard version of the well-known Expectation-Maximization (EM) algorithm, learning in such a model corresponds to FedAvg, the most popular algorithm in the federated learning setting. This perspective on FedAvg unifies several recent works in the field and opens up the possibility of extensions through different choices for the hierarchical model. Based on this view, we further propose a variant of the hierarchical model that employs prior distributions to promote sparsity. By similarly using the hard-EM algorithm for learning, we obtain FedSparse, a procedure that can learn sparse neural networks in the federated learning setting. FedSparse reduces communication costs from client to server and vice versa, as well as the computational cost of inference with the sparsified network, both of which are of great practical importance in federated learning.
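The EM correspondence described in the abstract can be illustrated with a small numerical sketch: the hard E-step is a client-side MAP estimate of the local parameters under the server-provided Gaussian prior, and the M-step sets the prior mean to the data-size weighted average of the client solutions, which is exactly the FedAvg aggregation rule. The sketch below is a hedged toy illustration, not the paper's implementation: it assumes quadratic (linear-regression) client losses so the E-step has a closed form, whereas FedAvg approximates the local optimization with a few epochs of SGD; all names and hyper-parameters are illustrative.

```python
# Toy sketch: FedAvg viewed as hard-EM with a Gaussian prior over client
# parameters (illustrative assumption-laden example, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

# Each client k holds its own linear-regression dataset (X_k, y_k).
clients = []
for _ in range(5):
    X = rng.normal(size=(100, 10))
    w_true = rng.normal(size=10)
    y = X @ w_true + 0.1 * rng.normal(size=100)
    clients.append((X, y))

def e_step(theta_server, X, y, prior_strength=1.0):
    """Hard E-step: MAP estimate of the client-specific parameters under a
    Gaussian prior N(theta_server, (1/prior_strength) I).  With a quadratic
    loss this is ridge regression centred at the server parameters."""
    d = X.shape[1]
    A = X.T @ X + prior_strength * np.eye(d)
    b = X.T @ y + prior_strength * theta_server
    return np.linalg.solve(A, b)

def m_step(client_params, client_sizes):
    """M-step: the prior mean that maximises the joint objective is the
    data-size weighted average of the client MAP estimates, i.e. the
    FedAvg server aggregation."""
    w = np.asarray(client_sizes, dtype=float)
    w /= w.sum()
    return np.average(client_params, axis=0, weights=w)

theta = np.zeros(10)          # server prior mean
for _ in range(20):           # federated rounds = hard-EM iterations
    local = [e_step(theta, X, y) for X, y in clients]
    theta = m_step(local, [len(y) for _, y in clients])
```

In this view, swapping the Gaussian prior for a sparsity-promoting prior changes the E-step (clients regularize toward sparse local solutions) and yields FedSparse-style updates, while the overall hard-EM structure of the rounds stays the same.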


Related research

07/19/2021  RingFed: Reducing Communication Costs in Federated Learning on Non-IID Data
11/08/2020  Adaptive Federated Dropout: Improving Communication Efficiency and Generalization for Federated Learning
02/08/2023  Federated Learning as Variational Inference: A Scalable Expectation Propagation Approach
08/22/2023  EM for Mixture of Linear Regression with Clustered Data
11/03/2021  Federated Expectation Maximization with heterogeneity mitigation and variance reduction
01/24/2022  Decentralized EM to Learn Gaussian Mixtures from Datasets Distributed by Features
