Backpropagation Clipping for Deep Learning with Differential Privacy

02/10/2022
by Timothy Stevens, et al.

We present backpropagation clipping, a novel variant of differentially private stochastic gradient descent (DP-SGD) for privacy-preserving deep learning. Our approach clips each trainable layer's inputs (during the forward pass) and its upstream gradients (during the backward pass) to ensure bounded global sensitivity for the layer's gradient; this combination replaces the gradient clipping step in existing DP-SGD variants. Our approach is simple to implement in existing deep learning frameworks. The results of our empirical evaluation demonstrate that backpropagation clipping provides higher accuracy at lower values for the privacy parameter ϵ compared to previous work. We achieve 98.7% accuracy on CIFAR-10 with ϵ = 3.64.
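To make the mechanism concrete, here is a minimal PyTorch-style sketch, not the authors' implementation: a hypothetical ClippedLinear layer clips its inputs in the forward pass and the gradient flowing into it in the backward pass, which bounds the per-example norm of its weight gradient. The layer name, the clipping bounds, and the omission of noise addition and privacy accounting are assumptions of this sketch.

```python
import torch
import torch.nn as nn


def clip_rows(t, bound):
    """Scale each example (row of the batch) so its L2 norm is at most `bound`."""
    flat = t.flatten(1)
    norms = flat.norm(dim=1, keepdim=True).clamp(min=1e-12)
    return (flat * (bound / norms).clamp(max=1.0)).view_as(t)


class _ClipUpstreamGrad(torch.autograd.Function):
    """Identity on the forward pass; clips the upstream gradient on the backward pass."""

    @staticmethod
    def forward(ctx, x, bound):
        ctx.bound = bound
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return clip_rows(grad_output, ctx.bound), None


class ClippedLinear(nn.Module):
    """Linear layer with clipped inputs and clipped upstream gradients (illustrative).

    Each per-example weight gradient is an outer product of the upstream gradient
    and the input, so its norm is bounded by input_bound * grad_bound -- a fixed
    sensitivity to which gradient noise can then be calibrated.
    """

    def __init__(self, in_features, out_features, input_bound=1.0, grad_bound=1.0):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.input_bound = input_bound
        self.grad_bound = grad_bound

    def forward(self, x):
        x = clip_rows(x, self.input_bound)                  # forward-pass input clipping
        y = self.linear(x)
        return _ClipUpstreamGrad.apply(y, self.grad_bound)  # backward-pass gradient clipping
```

With the per-example gradient norm of such a layer bounded, noise calibrated to that bound would be added to the layer's gradients after `loss.backward()` and before the optimizer step, in place of DP-SGD's per-example gradient clipping; the noise calibration and privacy accounting are omitted from this sketch.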
