DP-SGD Without Clipping: The Lipschitz Neural Network Way

05/25/2023
by Louis Béthune, et al.

State-of-the-art approaches for training Differentially Private (DP) Deep Neural Networks (DNNs) face difficulties in estimating tight bounds on the sensitivity of the network's layers, and instead rely on per-sample gradient clipping. This clipping process not only biases the direction of gradients but is also costly in both memory consumption and computation. To provide sensitivity bounds and bypass the drawbacks of clipping, our theoretical analysis of Lipschitz-constrained networks reveals an unexplored link between the Lipschitz constant with respect to their inputs and the one with respect to their parameters. By bounding the Lipschitz constant of each layer with respect to its parameters, we guarantee DP training of these networks. This analysis not only allows the computation of the aforementioned sensitivities at scale but also offers guidance on how to maximize the gradient-to-noise ratio for fixed privacy guarantees. To facilitate the application of Lipschitz networks and foster robust and certifiable learning under privacy guarantees, we provide a Python package that implements building blocks for the construction and private training of such networks.
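The core idea admits a compact illustration. Below is a minimal, framework-agnostic Python sketch (not the paper's package API) of a clipping-free DP-SGD step: when each layer's Lipschitz constant is known, the norm of each per-layer gradient can be bounded analytically, so Gaussian noise can be calibrated directly to that bound instead of to a clipping threshold. The function names, the exact form of the bound, and the hyperparameters are illustrative assumptions.

import numpy as np

def layer_sensitivity(lip_loss, input_bound, lip_constants, layer_idx):
    # Illustrative bound (hypothetical, not the paper's exact formula):
    # the gradient w.r.t. layer l's weights factors into the norm of the
    # layer's input (bounded by the input-norm bound times the Lipschitz
    # constants of the preceding layers) and the backpropagated signal
    # (bounded by the loss's Lipschitz constant times the constants of
    # the layers that follow).
    upstream = float(np.prod(lip_constants[:layer_idx]))
    downstream = float(np.prod(lip_constants[layer_idx + 1:]))
    return lip_loss * input_bound * upstream * downstream

def dp_sgd_step_no_clipping(params, grads, sensitivities,
                            noise_multiplier, lr, rng):
    # Gaussian mechanism calibrated to the analytic per-layer
    # sensitivity: no per-sample clipping, so gradient directions
    # are left unbiased.
    updated = []
    for theta, g, s in zip(params, grads, sensitivities):
        noise = rng.normal(0.0, noise_multiplier * s, size=g.shape)
        updated.append(theta - lr * (g + noise))
    return updated

# Usage sketch: three 1-Lipschitz layers, inputs with norm at most 1,
# and a 1-Lipschitz loss.
rng = np.random.default_rng(0)
lip = [1.0, 1.0, 1.0]
sens = [layer_sensitivity(1.0, 1.0, lip, l) for l in range(len(lip))]

Because the sensitivities are fixed by the architecture rather than estimated per batch, the noise scale can be computed once up front, which is the source of the memory and compute savings over per-sample clipping.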


