Privacy Amplification via Random Participation in Federated Learning

05/03/2022
by   Burak Hasircioglu, et al.

Running a randomized algorithm on a subsampled dataset, rather than on the entire dataset, amplifies its differential privacy guarantees. In this work, we consider a federated setting in which clients participate at random, in addition to subsampling their local datasets. Since random client participation correlates the subsampling of samples belonging to the same client, we analyze the resulting privacy amplification as amplification via non-uniform subsampling. We show that when the local datasets are small, the privacy guarantees via random participation are close to those of the centralized setting, in which the entire dataset is held by a single host and subsampled. On the other hand, when the local datasets are large, observing the output of the algorithm may reveal the identities of the sampled clients with high confidence. Our analysis shows that, even in this case, random participation yields better privacy guarantees than local subsampling alone.
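The two-level sampling scheme the abstract describes can be made concrete with a short sketch. Here is a minimal, hypothetical illustration (the function and parameter names `two_level_subsample`, `q_client`, and `p_local` are our own, not from the paper): each client is included with probability `q_client`, and only then are its local records Poisson-subsampled with rate `p_local`. A record's marginal inclusion probability is `q_client * p_local`, but records of the same client are correlated, since they are all excluded together whenever the client does not participate — this is the correlation the paper's non-uniform subsampling analysis accounts for.

```python
import random

def two_level_subsample(client_datasets, q_client, p_local, rng=None):
    """Sample records via random client participation followed by
    local Poisson subsampling (a sketch of the two-level scheme).

    client_datasets: list of per-client record lists.
    q_client: probability that a client participates in this round.
    p_local: per-record inclusion probability for a participating client.
    """
    rng = rng or random.Random(0)
    batch = []
    for records in client_datasets:
        if rng.random() < q_client:          # client participates at random
            for r in records:
                if rng.random() < p_local:   # local subsampling of records
                    batch.append(r)
    return batch
```

With small local datasets, this behaves much like centralized Poisson subsampling at rate `q_client * p_local`; with large local datasets, a participating client contributes many records at once, which is why the paper treats the two regimes separately.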


Related research

07/19/2021
Renyi Differential Privacy of the Subsampled Shuffle Model in Distributed Learning
We study privacy in a distributed learning framework, where clients coll...

06/25/2023
Private Aggregation in Wireless Federated Learning with Heterogeneous Clusters
Federated learning collaboratively trains a neural network on privately ...

12/20/2017
Differentially Private Federated Learning: A Client Level Perspective
Federated learning is a recent advance in privacy protection. In this co...

10/14/2022
Close the Gate: Detecting Backdoored Models in Federated Learning based on Client-Side Deep Layer Output Analysis
Federated Learning (FL) is a scheme for collaboratively training Deep Ne...

06/13/2023
Privacy Preserving Bayesian Federated Learning in Heterogeneous Settings
In several practical applications of federated learning (FL), the client...

06/13/2022
Federated Bayesian Neural Regression: A Scalable Global Federated Gaussian Process
In typical scenarios where the Federated Learning (FL) framework applies...

06/20/2022
Walking to Hide: Privacy Amplification via Random Message Exchanges in Network
The *shuffle model* is a powerful tool to amplify the privacy guarantees...
