Not all noise is accounted equally: How differentially private learning benefits from large sampling rates

10/12/2021
by   Friedrich Dörmann, et al.

Learning often involves sensitive data, and privacy-preserving extensions to Stochastic Gradient Descent (SGD) and other machine learning algorithms have therefore been developed using the definitions of Differential Privacy (DP). In differentially private SGD, the gradients computed at each training iteration are subject to two different types of noise: first, inherent sampling noise arising from the use of minibatches; second, additive Gaussian noise from the underlying mechanisms that introduce privacy. In this study, we show that these two types of noise are equivalent in their effect on the utility of private neural networks, but they are not accounted for equally in the privacy budget. Given this observation, we propose a training paradigm that shifts the proportions of noise towards less inherent and more additive noise, so that more of the overall noise can be accounted for in the privacy budget. With this paradigm, we are able to improve on the state of the art in the privacy/utility tradeoff of private end-to-end CNNs.
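The interplay the abstract describes can be illustrated with a minimal DP-SGD step. The sketch below is not the authors' implementation; the function name and parameters (`clip_norm`, `noise_multiplier`) are illustrative, but they mirror the standard DP-SGD mechanism: each per-example gradient is clipped to a fixed norm, the clipped gradients are summed, Gaussian noise calibrated to the clipping norm is added, and the result is averaged. A larger minibatch (higher sampling rate) dilutes the same additive noise across more examples, which is the lever the paper's training paradigm exploits.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """One illustrative DP-SGD update.

    per_example_grads: list of gradient vectors, one per example in the batch.
    clip_norm:         maximum L2 norm each per-example gradient may have.
    noise_multiplier:  std. dev. of the added Gaussian noise, in units of clip_norm.
    rng:               a numpy random Generator.
    """
    # Clip each per-example gradient to at most clip_norm in L2 norm.
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    # Sum, add Gaussian noise scaled to the clipping norm, and average.
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)
```

Note that the noise standard deviation depends only on `clip_norm` and `noise_multiplier`, not on the batch size, so doubling the batch halves the noise contribution to the averaged update while the sampling noise also shrinks.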

Related research

- 11/30/2020, "Gradient Sparsification Can Improve Performance of Differentially-Private Convex Machine Learning": We use gradient sparsification to reduce the adverse effect of different...
- 06/08/2023, "Differentially Private Image Classification by Learning Priors from Random Processes": In privacy-preserving machine learning, differentially private stochasti...
- 06/24/2019, "The Value of Collaboration in Convex Machine Learning with Differential Privacy": In this paper, we apply machine learning to distributed private data own...
- 09/30/2018, "Privacy-preserving Stochastic Gradual Learning": It is challenging for stochastic optimizations to handle large-scale sen...
- 11/09/2022, "Directional Privacy for Deep Learning": Differentially Private Stochastic Gradient Descent (DP-SGD) is a key met...
- 02/28/2023, "Arbitrary Decisions are a Hidden Cost of Differentially-Private Training": Mechanisms used in privacy-preserving machine learning often aim to guar...
- 03/03/2023, "Exploring Machine Learning Privacy/Utility trade-off from a hyperparameters Lens": Machine Learning (ML) architectures have been applied to several applica...
