On the Interaction Between Differential Privacy and Gradient Compression in Deep Learning

11/01/2022
by   Jimmy Lin, et al.

While differential privacy and gradient compression are separately well-researched topics in machine learning, the study of the interaction between the two is still relatively new. We perform a detailed empirical study of how the Gaussian mechanism for differential privacy and gradient compression jointly impact test accuracy in deep learning. The existing literature on gradient compression mostly evaluates compression in the absence of differential privacy guarantees, and demonstrates that sufficiently high compression rates reduce accuracy. Similarly, the existing literature on differential privacy evaluates privacy mechanisms in the absence of compression, and demonstrates that sufficiently strong privacy guarantees reduce accuracy. In this work, we observe that while gradient compression generally has a negative impact on test accuracy in non-private training, it can sometimes improve test accuracy in differentially private training. Specifically, we observe that when aggressive sparsification or rank reduction is applied to the gradients, test accuracy is less affected by the Gaussian noise added for differential privacy. These observations are explained through an analysis of how differential privacy and compression affect the bias and variance in estimating the average gradient. We follow this study with a recommendation on how to improve test accuracy in the context of differentially private deep learning and gradient compression. We evaluate this proposal and find that it can reduce the negative impact of noise added by differential privacy mechanisms on test accuracy by up to 24.6%, and reduce the negative impact of gradient sparsification on test accuracy by up to 15.1%.
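To make the setting concrete, here is a minimal sketch (not the authors' implementation) of a single DP-SGD step combined with top-k gradient sparsification, written in NumPy. The clipping bound C, noise multiplier sigma, sparsity level k, and the choice to sparsify after noising are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: one DP-SGD step with Gaussian noise and top-k
# sparsification. All hyperparameters below are illustrative.
import numpy as np

def dp_sgd_step_with_topk(per_example_grads, C=1.0, sigma=1.0, k=100,
                          rng=np.random.default_rng(0)):
    """Clip per-example gradients, add Gaussian noise, then keep only the
    k largest-magnitude coordinates of the noisy average gradient."""
    n, d = per_example_grads.shape

    # 1. Per-example L2 clipping bounds each example's sensitivity by C.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, C / np.maximum(norms, 1e-12))

    # 2. Gaussian mechanism: add noise with scale sigma * C to the sum,
    #    then average over the batch.
    noisy_sum = clipped.sum(axis=0) + rng.normal(0.0, sigma * C, size=d)
    noisy_avg = noisy_sum / n

    # 3. Top-k sparsification: zero out all but the k largest-magnitude
    #    coordinates of the noisy average (a biased, low-variance estimate).
    sparse = np.zeros_like(noisy_avg)
    idx = np.argpartition(np.abs(noisy_avg), -k)[-k:]
    sparse[idx] = noisy_avg[idx]
    return sparse

# Example: a batch of 32 per-example gradients of dimension 1000.
g = np.random.default_rng(1).normal(size=(32, 1000))
private_sparse_grad = dp_sgd_step_with_topk(g, k=50)
```

Sparsifying after the noise is added means most noisy coordinates are simply dropped, which is one intuition for why aggressive compression can partially offset the accuracy cost of the Gaussian mechanism; sparsifying before noising is an equally plausible ordering with a different bias-variance trade-off.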


