Robust Differentially Private Training of Deep Neural Networks

06/19/2020
by Ali Davody, et al.

Differentially private stochastic gradient descent (DPSGD) is a variation of stochastic gradient descent based on the Differential Privacy (DP) paradigm which can mitigate privacy threats arising from the presence of sensitive information in training data. One major drawback of training deep neural networks with DPSGD is a reduction in the model's accuracy. In this paper, we propose an alternative method for preserving data privacy based on introducing noise through learnable probability distributions, which leads to a significant improvement in the utility of the resulting private models. We also demonstrate that normalization layers have a large beneficial impact on the performance of deep neural networks with noisy parameters. In particular, we show that contrary to general belief, a large amount of random noise can be added to the weights of neural networks without harming the performance, once the networks are augmented with normalization layers. We hypothesize that this robustness is a consequence of the scale invariance property of normalization operators. Building on these observations, we propose a new algorithmic technique for training deep neural networks under very low privacy budgets by sampling weights from Gaussian distributions and utilizing batch or layer normalization techniques to prevent performance degradation. Our method outperforms previous approaches, including DPSGD, by a substantial margin on a comprehensive set of experiments on Computer Vision and Natural Language Processing tasks. In particular, we obtain a 20 percent accuracy improvement over DPSGD on the MNIST and CIFAR10 datasets with DP-privacy budgets of ε = 0.05 and ε = 2.0, respectively. Our code is available online: https://github.com/uds-lsv/SIDP.
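The key observation behind the method — that normalization layers make the forward pass invariant to the scale of the weights, so multiplicative weight noise is largely absorbed — can be checked numerically. The sketch below is illustrative only (it is not the authors' implementation from the linked repository); the layer-norm function, sizes, and the reparameterized weight sample are all assumptions chosen for the demonstration.

```python
import numpy as np

def layer_norm(z, eps=1e-5):
    # Normalize a vector of pre-activations to zero mean and unit variance.
    return (z - z.mean()) / np.sqrt(z.var() + eps)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))   # toy weight matrix (4 outputs, 8 inputs)
x = rng.normal(size=8)        # toy input vector

# Scale invariance: multiplying all weights by a positive constant
# leaves the layer-normalized output (approximately) unchanged.
h = layer_norm(W @ x)
h_scaled = layer_norm((3.0 * W) @ x)
print(np.allclose(h, h_scaled, atol=1e-4))

# Noise through a learnable distribution (reparameterization sketch):
# each weight is drawn as w = mu + sigma * eps, with mu and sigma the
# learnable parameters. Here sigma is a fixed illustrative value.
mu, sigma = W, 0.1
W_noisy = mu + sigma * rng.normal(size=W.shape)
h_noisy = layer_norm(W_noisy @ x)
```

Because gradients flow through `mu` and `sigma` in the reparameterized sample, the noise scale itself can be trained, which is the "learnable probability distributions" idea the abstract describes.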


