Understanding Gradient Clipping in Private SGD: A Geometric Perspective

06/27/2020
by Xiangyi Chen, et al.

Deep learning models are increasingly popular in many machine learning applications where the training data may contain sensitive information. To provide formal and rigorous privacy guarantees, many learning systems now incorporate differential privacy by training their models with (differentially) private SGD. A key step in each private SGD update is gradient clipping, which shrinks the gradient of an individual example whenever its L2 norm exceeds some threshold. We first demonstrate how gradient clipping can prevent SGD from converging to a stationary point. We then provide a theoretical analysis that fully quantifies the clipping bias on convergence via a disparity measure between the gradient distribution and a geometrically symmetric distribution. Our empirical evaluation further suggests that the gradient distributions along the trajectory of private SGD indeed exhibit a symmetric structure that favors convergence. Together, our results explain why private SGD with gradient clipping remains effective in practice despite its potential clipping bias. Finally, we develop a new perturbation-based technique that can provably correct the clipping bias even for instances with highly asymmetric gradient distributions.
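For concreteness, here is a minimal sketch of the per-example clipping and noising step the abstract describes, following the standard DP-SGD recipe rather than the authors' specific implementation; the names dp_sgd_step, C (clipping threshold), sigma (noise multiplier), and lr are illustrative.

```python
import numpy as np

def clip_gradient(g, C):
    """Shrink a per-example gradient so its L2 norm is at most C."""
    norm = np.linalg.norm(g)
    if norm > C:
        return g * (C / norm)
    return g

def dp_sgd_step(params, per_example_grads, C, sigma, lr, rng=None):
    """One private SGD update: clip each example's gradient,
    average the clipped gradients, then add calibrated Gaussian noise."""
    if rng is None:
        rng = np.random.default_rng()
    clipped = [clip_gradient(g, C) for g in per_example_grads]
    noisy_avg = np.mean(clipped, axis=0) + rng.normal(
        0.0, sigma * C / len(per_example_grads), size=params.shape)
    return params - lr * noisy_avg
```

Note that whenever some per-example norms exceed C, the average of the clipped gradients is no longer an unbiased estimate of the true gradient; this is the clipping bias whose effect on convergence the paper quantifies.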


Related research

08/20/2019 · AdaCliP: Adaptive Clipping for Private SGD
Privacy preserving machine learning algorithms are crucial for learning ...

09/08/2018 · Decentralized Differentially Private Without-Replacement Stochastic Gradient Descent
While machine learning has achieved remarkable results in a wide variety...

06/15/2016 · Bolt-on Differential Privacy for Scalable Stochastic Gradient Descent-based Analytics
While significant progress has been made separately on analytics systems...

08/23/2023 · Bias-Aware Minimisation: Understanding and Mitigating Estimator Bias in Private SGD
Differentially private SGD (DP-SGD) holds the promise of enabling the sa...

02/25/2021 · Do Not Let Privacy Overbill Utility: Gradient Embedding Perturbation for Private Learning
The privacy leakage of the model about the training data can be bounded ...

06/17/2021 · Large Scale Private Learning via Low-rank Reparametrization
We propose a reparametrization scheme to address the challenges of apply...

06/15/2020 · GS-WGAN: A Gradient-Sanitized Approach for Learning Differentially Private Generators
The wide-spread availability of rich data has fueled the growth of machi...
