Shuffled Check-in: Privacy Amplification towards Practical Distributed Learning

06/07/2022
by   Seng Pei Liew, et al.

Recent studies of distributed computation with formal privacy guarantees, such as differentially private (DP) federated learning, leverage random sampling of clients in each round (privacy amplification by subsampling) to achieve satisfactory levels of privacy. Achieving this, however, requires strong assumptions that may not hold in practice, including precise and uniform subsampling of clients and a highly trusted aggregator that processes clients' data. In this paper, we explore a more practical protocol, shuffled check-in, to resolve these issues. The protocol relies on each client making an independent, random decision to participate in the computation, which removes the need for server-initiated subsampling and enables robust modelling of client dropouts. Moreover, a weaker trust model, known as the shuffle model, is employed instead of a trusted aggregator. To this end, we introduce new tools to characterize the Rényi differential privacy (RDP) of shuffled check-in. We show that our new techniques improve the privacy guarantee by at least a factor of three over approaches based on approximate DP's strong composition across various parameter regimes. Furthermore, we provide a numerical approach for tracking the privacy of generic shuffled check-in mechanisms, including distributed stochastic gradient descent (SGD) with the Gaussian mechanism. To the best of our knowledge, this is also the first evaluation in the literature of the Gaussian mechanism within the local/shuffle model in the distributed setting, which may be of independent interest.
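As a rough illustration of the protocol the abstract describes, the following Python sketch runs one round of shuffled check-in with a local Gaussian mechanism. The function names, check-in probability, clipping norm, and noise scale are illustrative assumptions, not the paper's reference implementation, and the sketch omits the RDP accounting that the paper develops.

```python
# Minimal sketch of one round of shuffled check-in with a local Gaussian
# mechanism. All names and parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def local_report(grad, clip_norm, sigma):
    """Clip a client's gradient and add Gaussian noise locally."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / norm) if norm > 0 else grad
    return clipped + rng.normal(0.0, sigma * clip_norm, size=grad.shape)

def shuffled_check_in_round(client_grads, check_in_prob, clip_norm, sigma):
    # 1. Check-in: each client independently decides whether to
    #    participate, with no server-initiated subsampling.
    reports = [
        local_report(g, clip_norm, sigma)
        for g in client_grads
        if rng.random() < check_in_prob
    ]
    if not reports:
        return None  # no client checked in this round
    # 2. Shuffle: an intermediary randomly permutes the reports so the
    #    server cannot link a report to its sender (shuffle model).
    rng.shuffle(reports)
    # 3. Aggregate: the (untrusted) server averages the noisy reports,
    #    e.g. as one step of distributed SGD.
    return np.mean(reports, axis=0)

# Example usage with toy gradients from 100 clients.
grads = [rng.normal(size=10) for _ in range(100)]
update = shuffled_check_in_round(grads, check_in_prob=0.1,
                                 clip_norm=1.0, sigma=2.0)
```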

Related research

Privacy Amplification via Random Check-Ins (07/13/2020)
Differentially Private Stochastic Gradient Descent (DP-SGD) forms a fund...

cpSGD: Communication-efficient and differentially-private distributed SGD (05/27/2018)
Distributed stochastic gradient descent is an important subroutine in di...

Renyi Differential Privacy of the Subsampled Shuffle Model in Distributed Learning (07/19/2021)
We study privacy in a distributed learning framework, where clients coll...

Considerations on the Theory of Training Models with Differential Privacy (03/08/2023)
In federated learning collaborative learning takes place by a set of cli...

Locally Differentially Private Federated Learning: Efficient Algorithms with Tight Risk Bounds (06/17/2021)
Federated learning (FL) is a distributed learning paradigm in which many...

Differential Private Hogwild! over Distributed Local Data Sets (02/17/2021)
We consider the Hogwild! setting where clients use local SGD iterations ...

LOCKS: User Differentially Private and Federated Optimal Client Sampling (12/26/2022)
With changes in privacy laws, there is often a hard requirement for clie...
