
GRAFFL: Gradient-free Federated Learning of a Bayesian Generative Model

08/29/2020
by Seok-Ju Hahn, et al.

Federated learning platforms are gaining popularity. One of their major benefits is mitigating privacy risks, since algorithms can be trained without collecting or sharing data. While federated learning (e.g., methods based on stochastic gradient algorithms) has shown great promise, many challenges remain in protecting privacy, especially during the gradient update and exchange process. This paper presents GRAFFL, the first gradient-free federated learning framework for learning a Bayesian generative model, based on approximate Bayesian computation. Unlike conventional gradient-based federated learning algorithms, our framework requires neither disassembling a model (i.e., into linear components) nor perturbing data (or encrypting data for aggregation) to preserve privacy. Instead, it uses implicit information derived from each participating institution to learn posterior distributions of parameters. The implicit information consists of summary statistics derived from SuffiAE, a neural network developed in this study that produces compressed and linearly separable representations, thereby protecting sensitive information from leakage. We prove that SuffiAE, as a sufficient dimensionality reduction technique, provides sufficient summary statistics. We propose the GRAFFL-based Bayesian Gaussian mixture model as a proof of concept of the framework. Using several datasets, we demonstrate the feasibility and usefulness of our model in terms of privacy protection and prediction performance (i.e., close to an ideal setting). The trained model, as a quasi-global model, can generate informative samples incorporating information from other institutions, enhancing the data analysis of each institution.
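For readers unfamiliar with approximate Bayesian computation (ABC), the core idea the abstract relies on (accepting parameter draws whose simulated summary statistics land close to the observed ones, with no gradients involved) can be illustrated with a toy rejection sampler. This is a generic ABC sketch, not GRAFFL itself: the function name `abc_rejection`, the Gaussian toy model, and the distance and tolerance choices are all illustrative assumptions, and in GRAFFL the summary statistics come from SuffiAE rather than a hand-picked statistic.

```python
import random
import statistics

def abc_rejection(observed_summary, prior_sample, simulate, summarize,
                  n_draws=5000, tolerance=0.05):
    """Gradient-free ABC rejection sampling: keep parameter draws whose
    simulated summary statistic falls within `tolerance` of the observed one."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()          # draw a candidate from the prior
        synthetic = simulate(theta)     # simulate a dataset under theta
        if abs(summarize(synthetic) - observed_summary) < tolerance:
            accepted.append(theta)      # approximate posterior draw
    return accepted

# Toy example: infer the mean of a Gaussian with known sd = 1.
random.seed(0)
true_mu = 2.0
observed = [random.gauss(true_mu, 1.0) for _ in range(200)]
obs_summary = statistics.fmean(observed)  # the sample mean is sufficient here

posterior = abc_rejection(
    observed_summary=obs_summary,
    prior_sample=lambda: random.uniform(-5, 5),  # flat prior on mu
    simulate=lambda mu: [random.gauss(mu, 1.0) for _ in range(200)],
    summarize=statistics.fmean,
)
print(statistics.fmean(posterior))  # posterior mean should land near true_mu
```

Because only summary statistics (never raw data or gradients) cross the comparison step, this style of inference is what lets a federated variant exchange compressed summaries between institutions instead of gradient updates.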
