FedDistill: Making Bayesian Model Ensemble Applicable to Federated Learning

by Hong-You Chen et al.

Federated learning aims to leverage users' own data and computational resources to learn a strong global model, without directly accessing their data but only their local models. It usually requires multiple rounds of communication, in which aggregating local models into a global model plays an important role. In this paper, we propose a novel aggregation scenario and algorithm named FedDistill, which enjoys the robustness of Bayesian model ensembles in aggregating users' predictions and employs knowledge distillation to summarize the ensemble predictions into a global model, with the help of unlabeled data collected at the server. Our empirical studies validate FedDistill's superior performance, especially when users' data are not i.i.d. and as the neural networks grow deeper. Moreover, FedDistill is compatible with recent efforts to regularize users' local training, making it an easily applicable module: you need only replace the aggregation method and can leave the other parts of your federated learning algorithm intact.
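To make the two-stage aggregation concrete, here is a minimal sketch of the idea described in the abstract: average the clients' predictions on unlabeled server-side data (a simple approximation of a Bayesian model ensemble), then distill those soft labels into a single global model. All names, the linear-softmax client models, and the plain gradient-descent distillation loop are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def ensemble_predict(client_weights, X):
    # Average the clients' predicted probabilities on unlabeled data;
    # a crude stand-in for a Bayesian model ensemble over client models.
    probs = [softmax(X @ W) for W in client_weights]
    return np.mean(probs, axis=0)

def distill(X_unlabeled, soft_labels, n_classes, lr=0.5, steps=300):
    # Fit a fresh global (linear softmax) model to the ensemble's soft
    # labels by minimizing cross-entropy: knowledge distillation.
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(X_unlabeled.shape[1], n_classes))
    n = X_unlabeled.shape[0]
    for _ in range(steps):
        p = softmax(X_unlabeled @ W)
        W -= lr * (X_unlabeled.T @ (p - soft_labels)) / n
    return W

# Toy setup: 3 hypothetical client models, 100 unlabeled server points.
rng = np.random.default_rng(1)
X_server = rng.normal(size=(100, 5))
clients = [rng.normal(size=(5, 3)) for _ in range(3)]

soft = ensemble_predict(clients, X_server)      # step 1: ensemble
W_global = distill(X_server, soft, n_classes=3)  # step 2: distill
```

Only the aggregation step changes here; local client training and the communication loop of a standard federated algorithm would proceed as usual around it.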



