Skellam Mixture Mechanism: a Novel Approach to Federated Learning with Differential Privacy

12/08/2022
by Ergute Bao, et al.

Deep neural networks have strong capabilities of memorizing the underlying training data, which can be a serious privacy concern. An effective solution to this problem is to train models with differential privacy (DP), which provides rigorous privacy guarantees by injecting random noise into the gradients. This paper focuses on the scenario where sensitive data are distributed among multiple participants, who jointly train a model through federated learning (FL), using both secure multiparty computation (MPC) to ensure the confidentiality of each gradient update, and differential privacy to avoid data leakage in the resulting model. A major challenge in this setting is that common mechanisms for enforcing DP in deep learning, which inject real-valued noise, are fundamentally incompatible with MPC, which exchanges finite-field integers among the participants. Consequently, most existing DP mechanisms require rather high noise levels, leading to poor model utility. Motivated by this, we propose the Skellam mixture mechanism (SMM), an approach to enforce DP on models built via FL. Compared to existing methods, SMM eliminates the assumption that the input gradients must be integer-valued, and thus reduces the amount of noise injected to preserve DP. Further, SMM allows tight privacy accounting due to the nice composition and sub-sampling properties of the Skellam distribution, which are key to accurate deep learning with DP. The theoretical analysis of SMM is highly non-trivial, especially considering (i) the complicated math of differentially private deep learning in general, and (ii) the fact that the mixture of two Skellam distributions is rather complex and, to our knowledge, has not been studied in the DP literature. Extensive experiments on various practical settings demonstrate that SMM consistently and significantly outperforms existing solutions in terms of the utility of the resulting model.
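To make the setting concrete, the sketch below illustrates the general integer-noise pipeline the abstract describes (clip a gradient, scale and round it to integers as MPC requires, then add symmetric Skellam noise, i.e., the difference of two independent Poisson draws). This is a minimal illustration of the idea, not the authors' SMM algorithm; the clipping norm, scaling factor, and noise parameter `mu` are placeholder values chosen for the example.

```python
import numpy as np

def skellam_noise(mu, size, rng):
    # A Skellam(mu, mu) sample is the difference of two
    # independent Poisson(mu) samples; it is symmetric around 0.
    return rng.poisson(mu, size) - rng.poisson(mu, size)

def privatize_gradient(grad, clip_norm=1.0, scale=2**10, mu=100.0, seed=0):
    rng = np.random.default_rng(seed)
    # 1. Clip the gradient to bound its L2 sensitivity.
    norm = np.linalg.norm(grad)
    grad = grad * min(1.0, clip_norm / norm)
    # 2. Scale and round to integers, since MPC protocols
    #    exchange finite-field integers among participants.
    quantized = np.round(grad * scale).astype(np.int64)
    # 3. Add discrete Skellam noise to the integer gradient.
    return quantized + skellam_noise(mu, quantized.shape, rng)

noisy = privatize_gradient(np.array([0.3, -0.5, 0.8]))
```

The appeal of Skellam noise in this context is that it is integer-valued (so it composes with MPC) while its tails behave similarly enough to a discretized Gaussian to permit tight privacy accounting.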


Related research

- 10/27/2021: Differentially Private Federated Bayesian Optimization with Distributed Exploration
- 09/26/2022: Taming Client Dropout for Distributed Differential Privacy in Federated Learning
- 08/25/2022: DPAUC: Differentially Private AUC Computation in Federated Learning
- 02/19/2023: On the f-Differential Privacy Guarantees of Discrete-Valued Mechanisms
- 03/09/2022: IncShrink: Architecting Efficient Outsourced Databases using Incremental MPC and Differential Privacy
- 02/12/2022: Local Differential Privacy for Federated Learning in Industrial Settings
- 05/01/2020: Secure Network Release with Link Privacy
