Rényi Differential Privacy of the Subsampled Shuffle Model in Distributed Learning

07/19/2021
by Antonious M. Girgis, et al.

We study privacy in a distributed learning framework in which clients collaboratively and iteratively build a learning model through interactions with a server, from whom we need privacy. Motivated by stochastic optimization and the federated learning (FL) paradigm, we focus on the case where a small fraction of the data samples is randomly sub-sampled in each round to participate in the learning process, which also enables privacy amplification. To obtain even stronger local privacy guarantees, we study this in the shuffle privacy model, where each client randomizes its response using a local differentially private (LDP) mechanism and the server receives only a random permutation (shuffle) of the clients' responses, with no association to individual clients. The principal result of this paper is a privacy-optimization performance trade-off for discrete randomization mechanisms in this sub-sampled shuffle privacy model, enabled by a new theoretical technique for analyzing the Rényi Differential Privacy (RDP) of the sub-sampled shuffle model. We demonstrate numerically that, in important regimes, our bound under composition yields a significant improvement in the privacy guarantee over the state-of-the-art approximate differential privacy (DP) guarantee (with strong composition) for sub-sampled shuffle models. We also demonstrate numerically a significant improvement in the privacy-learning performance operating point on real datasets.
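To make the mechanism concrete: in each round, a random fraction of clients is sampled, each sampled client privatizes its report with a local DP mechanism, and a shuffler delivers the reports to the server in random order. Recall that a mechanism M satisfies (α, ε(α))-RDP if the Rényi divergence of order α between its output distributions on any two neighboring datasets is at most ε(α). Below is a minimal Python sketch of one such round, assuming binary randomized response as the LDP mechanism; the function and parameter names (subsampled_shuffle_round, gamma, epsilon_0) are illustrative choices, not taken from the paper.

```python
# Minimal sketch of one round of the sub-sampled shuffle model.
# Assumptions: clients hold bits, binary randomized response is the
# LDP mechanism, and all names below are hypothetical.

import math
import random

def randomized_response(bit: int, epsilon_0: float) -> int:
    """Binary randomized response: keeps the true bit with probability
    e^eps0 / (e^eps0 + 1), an epsilon_0-LDP mechanism."""
    p_keep = math.exp(epsilon_0) / (math.exp(epsilon_0) + 1.0)
    return bit if random.random() < p_keep else 1 - bit

def subsampled_shuffle_round(data: list, gamma: float, epsilon_0: float) -> list:
    """One round: sub-sample a gamma fraction of clients, privatize each
    sampled report locally, then shuffle so the server only sees an
    unordered multiset of responses."""
    k = max(1, int(gamma * len(data)))
    sampled = random.sample(data, k)                                 # random client sub-sampling
    reports = [randomized_response(x, epsilon_0) for x in sampled]   # local privatization
    random.shuffle(reports)                                          # anonymizing shuffler
    return reports                                                   # the server's entire view

# Example: 1000 clients, 10% participate per round, epsilon_0 = 1
clients = [random.randint(0, 1) for _ in range(1000)]
print(subsampled_shuffle_round(clients, gamma=0.1, epsilon_0=1.0)[:10])
```

The server's view is only the shuffled multiset of privatized reports; the paper's RDP analysis quantifies how the combination of sub-sampling and shuffling amplifies the privacy of this view under composition across rounds.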


Related research

On the Rényi Differential Privacy of the Shuffle Model (05/11/2021)
The central question studied in this paper is Rényi Differential Privacy...

Privacy Amplification via Random Participation in Federated Learning (05/03/2022)
Running a randomized algorithm on a subsampled dataset instead of the en...

Taming Client Dropout for Distributed Differential Privacy in Federated Learning (09/26/2022)
Federated learning (FL) is increasingly deployed among multiple clients ...

Shuffled Check-in: Privacy Amplification towards Practical Distributed Learning (06/07/2022)
Recent studies of distributed computation with formal privacy guarantees...

Shuffled Model of Federated Learning: Privacy, Communication and Accuracy Trade-offs (08/17/2020)
We consider a distributed empirical risk minimization (ERM) optimization...

Differential Private Hogwild! over Distributed Local Data Sets (02/17/2021)
We consider the Hogwild! setting where clients use local SGD iterations ...

Differentially-Private "Draw and Discard" Machine Learning (07/11/2018)
In this work, we propose a novel framework for privacy-preserving client...
