Exploring Private Federated Learning with Laplacian Smoothing

05/01/2020
by Zhicong Liang, et al.

Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users. However, an adversary may still be able to infer the private training data by attacking the released model. Differential privacy (DP) provides a statistical guarantee against such attacks, at the cost of possibly degrading the accuracy or utility of the trained models. In this paper, we apply a utility-enhancement scheme based on Laplacian smoothing to differentially private federated learning (DP-Fed-LS), improving the statistical precision of parameter aggregation with injected Gaussian noise. We provide tight closed-form privacy bounds for both uniform and Poisson subsampling and derive the corresponding DP guarantees for differentially private federated learning, with or without Laplacian smoothing. Experiments on the MNIST, SVHN, and Shakespeare datasets show that the proposed method improves model accuracy under both subsampling mechanisms while maintaining the DP guarantee.
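The mechanics of one such communication round are easy to sketch: the server clips and averages the sampled clients' updates, injects Gaussian noise calibrated to the clipping norm, and then applies an FFT-based Laplacian smoothing operator (I + sigma*L)^{-1} to the noisy aggregate to reduce its variance. Below is a minimal NumPy sketch under these assumptions; the function names, the noise scale noise_mult * clip_norm / m, and the default sigma are illustrative choices, not the paper's exact algorithm or hyperparameters.

```python
import numpy as np

def laplacian_smooth(g, sigma=1.0):
    # Solve (I + sigma * L) x = g via FFT, where L is the 1-D discrete
    # Laplacian with periodic boundary, so the system matrix is circulant
    # with first column [1 + 2*sigma, -sigma, 0, ..., 0, -sigma].
    d = g.size
    v = np.zeros(d)
    v[0] = 1.0 + 2.0 * sigma
    v[1] -= sigma
    v[-1] -= sigma
    return np.real(np.fft.ifft(np.fft.fft(g) / np.fft.fft(v)))

def dp_fed_ls_round(global_w, client_updates, clip_norm, noise_mult, sigma=1.0):
    # One hypothetical server round: clip each sampled client's update to
    # clip_norm, average, inject Gaussian noise scaled to the clipping
    # sensitivity, then denoise the aggregate with Laplacian smoothing.
    m = len(client_updates)
    clipped = [u * min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12))
               for u in client_updates]
    noise = np.random.normal(0.0, noise_mult * clip_norm / m,
                             size=global_w.shape)
    noisy_avg = np.mean(clipped, axis=0) + noise
    return global_w + laplacian_smooth(noisy_avg, sigma)
```

Setting sigma = 0 makes the smoothing operator the identity, recovering plain noisy federated averaging, so the smoothing step is easy to ablate against a DP-FedAvg baseline.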

