Federated Model Distillation with Noise-Free Differential Privacy

09/11/2020
by Lichao Sun, et al.

Conventional federated learning, which directly averages model weights, is only possible when all local models share the same architecture; this is a restrictive constraint on collaboration between models with heterogeneous architectures. Sharing predictions instead of weights removes this obstacle and eliminates the risk of white-box inference attacks that conventional federated learning faces. However, the predictions of local models are themselves sensitive and can leak private information to the public, and there is currently no theoretical guarantee that sharing predictions is private and secure. A naive remedy, following previous privacy work on federated learning, is to add differentially private random noise to the predictions. Although noise perturbation mitigates the privacy concern, it introduces a substantial trade-off between the privacy budget and model performance. In this paper, we fill this gap with a novel framework called FedMD-NFDP, which applies a newly proposed Noise-Free Differential Privacy (NFDP) mechanism to a federated model distillation framework. NFDP effectively protects the privacy of local data with minimal sacrifice of model utility. Extensive experimental results on various datasets validate that FedMD-NFDP delivers comparable utility and communication efficiency while providing a noise-free differential privacy guarantee. We also demonstrate the feasibility of FedMD-NFDP under both IID and non-IID settings, heterogeneous model architectures, and unlabelled public datasets drawn from a different distribution.
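To make the mechanism concrete, below is a minimal sketch of FedMD-NFDP-style rounds, assuming toy softmax-regression clients and synthetic data; the names (Client, nfdp_sample, distill, blobs) are illustrative, not the authors' code. The key steps it shows: each client trains only on a subset of size k sampled without replacement from its private data (the NFDP mechanism, whose differential privacy guarantee follows from subsampling alone, per the paper), shares only soft predictions on a public dataset, and then distills from the server's averaged consensus.

# Minimal FedMD-NFDP-style sketch (hypothetical names, toy linear models).
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

class Client:
    """One party with a private dataset and its own local model."""
    def __init__(self, X, y, n_classes):
        self.X, self.y = X, y
        self.W = np.zeros((X.shape[1], n_classes))

    def nfdp_sample(self, k):
        # NFDP mechanism: train only on k examples sampled without
        # replacement from the n private examples; no noise is added.
        idx = rng.choice(len(self.X), size=k, replace=False)
        return self.X[idx], self.y[idx]

    def train_local(self, X, y, lr=0.5, steps=200):
        Y = np.eye(self.W.shape[1])[y]           # one-hot labels
        for _ in range(steps):
            grad = X.T @ (softmax(X @ self.W) - Y) / len(X)
            self.W -= lr * grad

    def predict(self, X_pub):
        # Only these soft predictions ever leave the client.
        return softmax(X_pub @ self.W)

    def distill(self, X_pub, consensus, lr=0.5, steps=200):
        # Fit the local model to the server's averaged soft labels.
        for _ in range(steps):
            grad = X_pub.T @ (softmax(X_pub @ self.W) - consensus) / len(X_pub)
            self.W -= lr * grad

def blobs(n):
    # Synthetic two-class data for the sketch.
    X = rng.normal(size=(n, 2)) + np.where(rng.random(n)[:, None] < 0.5, 2, -2)
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    return X, y

clients = [Client(*blobs(500), n_classes=2) for _ in range(3)]
X_pub, _ = blobs(200)          # unlabelled public data

for rnd in range(5):           # federated distillation rounds
    for c in clients:
        c.train_local(*c.nfdp_sample(k=100))
    consensus = np.mean([c.predict(X_pub) for c in clients], axis=0)
    for c in clients:
        c.distill(X_pub, consensus)

Note that no noise is injected at any step: the privacy budget is governed entirely by the subset size k relative to the private dataset size n, which is what distinguishes NFDP from noise-perturbation baselines.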
