Probabilistic Federated Learning of Neural Networks Incorporated with Global Posterior Information

12/06/2020
by Peng Xiao, et al.

In federated learning, models trained on local clients are distilled into a global model. Due to the permutation invariance that arises in neural networks, the hidden neurons must first be matched when executing federated learning with neural networks. Using a Bayesian nonparametric framework, Probabilistic Federated Neural Matching (PFNM) matches and fuses local neural networks, adapting to a varying global model size and to heterogeneity in the data. In this paper, we propose a new method that extends PFNM with a Kullback-Leibler (KL) divergence term over the product of neural components, so that inference exploits posterior information at both the local and global levels. We also show theoretically that the additional term can be seamlessly incorporated into the match-and-fuse process. A series of simulations indicates that our new method outperforms popular state-of-the-art federated learning methods in both the single communication round and multiple communication round settings.
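The permutation invariance motivating PFNM can be illustrated with a minimal sketch (assuming a one-hidden-layer ReLU network with illustrative NumPy weights, not the paper's actual models): permuting the hidden neurons, i.e. the rows of the first weight matrix together with the columns of the second, leaves the network's function unchanged, which is why naively averaging client weights coordinate-wise can fuse unrelated neurons.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, k = 5, 8, 3  # input, hidden, output sizes (illustrative)

# Random one-hidden-layer ReLU network.
W1, b1 = rng.normal(size=(h, d)), rng.normal(size=h)
W2, b2 = rng.normal(size=(k, h)), rng.normal(size=k)

def forward(x, W1, b1, W2, b2):
    # Hidden layer followed by a linear output layer.
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Apply the same permutation to hidden-unit rows of (W1, b1)
# and to the matching columns of W2.
perm = rng.permutation(h)
x = rng.normal(size=d)
y = forward(x, W1, b1, W2, b2)
y_perm = forward(x, W1[perm], b1[perm], W2[:, perm], b2)

# The permuted network computes the identical function.
print(np.allclose(y, y_perm))  # True
```

Matching methods such as PFNM first align hidden neurons across clients (resolving this permutation ambiguity) before fusing their parameters.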


Related research:

- Bayesian Nonparametric Federated Learning of Neural Networks (05/28/2019): In federated learning problems, data is scattered across different serve...
- Bayesian Federated Neural Matching that Completes Full Information (11/15/2022): Federated learning is a contemporary machine learning paradigm where loc...
- Investigating Neuron Disturbing in Fusing Heterogeneous Neural Networks (10/24/2022): Fusing deep learning models trained on separately located clients into a...
- Model Fusion with Kullback–Leibler Divergence (07/13/2020): We propose a method to fuse posterior distributions learned from heterog...
- Adaptive Personalized Federated Learning (03/30/2020): Investigation of the degree of personalization in federated learning alg...
- FedKL: Tackling Data Heterogeneity in Federated Reinforcement Learning by Penalizing KL Divergence (04/18/2022): As a distributed learning paradigm, Federated Learning (FL) faces the co...
- LoSAC: An Efficient Local Stochastic Average Control Method for Federated Optimization (12/15/2021): Federated optimization (FedOpt), which targets at collaboratively traini...
