Have it your way: Individualized Privacy Assignment for DP-SGD

03/29/2023
by Franziska Boenisch, et al.

When training a machine learning model with differential privacy, one sets a privacy budget. This budget is an upper bound on the privacy violation that any user is willing to face by contributing their data to the training set. We argue that this approach is limited because different users may have different privacy expectations. Thus, setting a uniform privacy budget across all points may be overly conservative for some users or, conversely, insufficiently protective for others. In this paper, we capture these preferences through individualized privacy budgets. To demonstrate their practicality, we introduce a variant of Differentially Private Stochastic Gradient Descent (DP-SGD) that supports such individualized budgets. DP-SGD is the canonical approach to training models with differential privacy. We modify its data sampling and gradient noising mechanisms to arrive at our approach, which we call Individualized DP-SGD (IDP-SGD). Because IDP-SGD provides privacy guarantees tailored to the preferences of individual users and their data points, we find that it empirically improves privacy-utility trade-offs.
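The abstract mentions two levers for individualization: the data sampling mechanism and the gradient noising mechanism. The sketch below illustrates only the first idea in a simplified form, and it is not the authors' implementation: each point's Poisson sampling probability is scaled with a hypothetical per-point budget array (`budgets`), so points whose owners grant a larger budget are selected more often, while the noisy clipped-gradient step itself is standard DP-SGD. The helper names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def individualized_sample(budgets, base_rate=0.01):
    # Hypothetical "sampling" variant of individualization: scale each
    # point's Poisson sampling probability in proportion to its privacy
    # budget, so higher-budget points appear in more mini-batches.
    probs = base_rate * budgets / budgets.mean()
    return np.flatnonzero(rng.random(len(probs)) < probs)

def noisy_mean_gradient(grads, clip_norm=1.0, sigma=1.0):
    # Standard DP-SGD step on the sampled mini-batch: clip each
    # per-example gradient to clip_norm, sum, add Gaussian noise
    # calibrated to the clipping norm, and average.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    clipped = grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    noise = rng.normal(0.0, sigma * clip_norm, size=grads.shape[1])
    return (clipped.sum(axis=0) + noise) / max(len(grads), 1)

# Three illustrative privacy groups with increasingly permissive budgets.
budgets = np.array([1.0] * 800 + [2.0] * 150 + [3.0] * 50)
idx = individualized_sample(budgets)
grads = rng.normal(size=(len(idx), 10))  # stand-in per-example gradients
step = noisy_mean_gradient(grads)
```

Note that simply changing sampling rates does not by itself yield the per-user guarantees claimed in the paper; the paper's accounting ties sampling probabilities and noise multipliers to each budget, which this sketch omits.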


Related research

- Personalized DP-SGD using Sampling Mechanisms (05/24/2023)
  Personalized privacy becomes critical in deep learning for Trustworthy A...
- Dynamic Differential-Privacy Preserving SGD (10/30/2021)
  Differentially-Private Stochastic Gradient Descent (DP-SGD) prevents tra...
- Differentially Private Deep Learning with ModelMix (10/07/2022)
  Training large neural networks with meaningful/usable differential priva...
- DP-SGD vs PATE: Which Has Less Disparate Impact on Model Accuracy? (06/22/2021)
  Recent advances in differentially private deep learning have demonstrate...
- Complex-valued deep learning with differential privacy (10/07/2021)
  We present ζ-DP, an extension of differential privacy (DP) to complex-va...
- TAN without a burn: Scaling Laws of DP-SGD (10/07/2022)
  Differentially Private methods for training Deep Neural Networks (DNNs) ...
- Label differential privacy via clustering (10/05/2021)
  We present new mechanisms for label differential privacy, a relaxation o...
