Dynamic Differential-Privacy Preserving SGD

10/30/2021
by Jian Du, et al.

Differentially-Private Stochastic Gradient Descent (DP-SGD) prevents training-data privacy breaches by adding noise to the clipped gradient at each SGD step so that training satisfies the differential privacy (DP) definition. However, using the same clipping threshold and noise power at every training step produces unstable updates, and even a ramp-up period, which significantly reduces the model's accuracy. In this paper, we extend the Gaussian DP central limit theorem to calibrate the clipping value and the noise power for each step individually. This lets us propose dynamic DP-SGD, which incurs a lower privacy cost than DP-SGD at every intermediate update while reaching the same target privacy budget after a target number of updates. In particular, dynamic DP-SGD improves model accuracy without sacrificing privacy by gradually lowering both the clipping value and the noise power while adhering to a total privacy-budget constraint. Extensive experiments on a variety of deep learning tasks, including image classification, natural language processing, and federated learning, show that the proposed dynamic DP-SGD algorithm stabilizes updates and, as a result, significantly improves model accuracy over DP-SGD in the strong-privacy regime.
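The mechanics described in the abstract (clip each per-example gradient, average, add Gaussian noise, and let both the clipping threshold and the noise power shrink over training) can be sketched as follows. This is a minimal illustration only: the geometric decay schedule controlled by `rho`, and all function and parameter names, are assumptions for exposition, not the paper's actual Gaussian-DP-CLT calibration.

```python
import numpy as np

def dynamic_dp_sgd_step(per_example_grads, step, total_steps,
                        c0=1.0, sigma0=1.0, rho=0.7, lr=0.1, params=None):
    """One update of a dynamic DP-SGD sketch.

    Illustrative only: the per-step clipping threshold c_t and noise
    standard deviation sigma_t decay geometrically with the fraction of
    training completed. The decay rate `rho` is a hypothetical schedule,
    not the calibration derived in the paper.
    """
    # Per-step clipping threshold and noise scale (both shrink over time).
    frac = step / total_steps
    c_t = c0 * rho ** frac
    sigma_t = sigma0 * rho ** frac

    # Clip each per-example gradient to L2 norm at most c_t.
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, c_t / max(norm, 1e-12)))

    # Average the clipped gradients, then add Gaussian noise whose
    # standard deviation is proportional to the clip threshold.
    n = len(clipped)
    noisy_mean = (np.mean(clipped, axis=0)
                  + np.random.normal(0.0, sigma_t * c_t / n,
                                     size=clipped[0].shape))
    # Apply the SGD update if parameters are supplied.
    return params - lr * noisy_mean if params is not None else noisy_mean
```

Because both `c_t` and `sigma_t` shrink together, later updates are less perturbed, which is the source of the stabilization effect the abstract claims; the paper's contribution is choosing these per-step values so the total privacy cost still meets the target budget.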


Related research

- Have it your way: Individualized Privacy Assignment for DP-SGD (03/29/2023). When training a machine learning model with differential privacy, one se...
- Differentially Private SGD with Sparse Gradients (12/01/2021). To protect sensitive training data, differentially private stochastic gr...
- DP-FP: Differentially Private Forward Propagation for Large Models (12/29/2021). When applied to large-scale learning problems, the conventional wisdom o...
- Considerations on the Theory of Training Models with Differential Privacy (03/08/2023). In federated learning collaborative learning takes place by a set of cli...
- P3SGD: Patient Privacy Preserving SGD for Regularizing Deep CNNs in Pathological Image Classification (05/30/2019). Recently, deep convolutional neural networks (CNNs) have achieved great ...
- Noise-Augmented Privacy-Preserving Empirical Risk Minimization with Dual-purpose Regularizer and Privacy Budget Retrieval and Recycling (10/16/2021). We propose Noise-Augmented Privacy-Preserving Empirical Risk Minimizatio...
- TAN without a burn: Scaling Laws of DP-SGD (10/07/2022). Differentially Private methods for training Deep Neural Networks (DNNs) ...
