A General Approach to Adding Differential Privacy to Iterative Training Procedures

12/15/2018
by H. Brendan McMahan, et al.
Google

In this work we address the practical challenges of training machine learning models on privacy-sensitive datasets by introducing a modular approach that minimizes changes to training algorithms, provides a variety of configuration strategies for the privacy mechanism, and isolates and simplifies the critical logic that computes the final privacy guarantees. A key challenge is that training algorithms often require estimating many different quantities (vectors) from the same set of examples: for example, gradients of different layers in a deep learning architecture, as well as metrics and batch normalization parameters. Each of these may have different properties such as dimensionality, magnitude, and tolerance to noise. By extending previous work on the Moments Accountant for the subsampled Gaussian mechanism, we can provide privacy for such heterogeneous sets of vectors while also structuring the approach to minimize software engineering challenges.
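The core idea described above, clipping and noising each estimated quantity with its own parameters, can be illustrated with a short sketch. The snippet below is not the authors' implementation; it is a minimal NumPy illustration, and the vector names, clipping norms, and noise multipliers in it are hypothetical, assuming one L2 clipping bound and one noise multiplier per named quantity.

    # Minimal illustrative sketch (not the paper's implementation): each named
    # quantity gets its own L2 clipping norm and noise multiplier, and the
    # per-example contributions are clipped, summed, and noised independently.
    import numpy as np

    def clip_by_l2_norm(v, max_norm):
        # Scale v down so that ||v||_2 <= max_norm.
        norm = np.linalg.norm(v)
        if norm > max_norm:
            v = v * (max_norm / norm)
        return v

    def private_sum(per_example_vectors, max_norm, noise_multiplier, rng):
        # Clip each example's vector, sum the clipped vectors, and add Gaussian
        # noise with standard deviation noise_multiplier * max_norm, which matches
        # the L2 sensitivity of the clipped sum (Gaussian mechanism).
        clipped = [clip_by_l2_norm(v, max_norm) for v in per_example_vectors]
        total = np.sum(clipped, axis=0)
        noise = rng.normal(0.0, noise_multiplier * max_norm, size=total.shape)
        return total + noise

    # Hypothetical per-quantity configuration: different clipping norms and
    # noise multipliers for two layers' gradients and a batch-norm statistic.
    config = {
        "layer1/grad": {"max_norm": 1.0, "noise_multiplier": 1.1},
        "layer2/grad": {"max_norm": 0.5, "noise_multiplier": 1.1},
        "batch_norm/mean": {"max_norm": 0.1, "noise_multiplier": 2.0},
    }

    rng = np.random.default_rng(0)
    batch = [  # one dict of vectors per example in the sampled minibatch
        {"layer1/grad": np.ones(4), "layer2/grad": np.ones(2),
         "batch_norm/mean": np.ones(3)},
        {"layer1/grad": -np.ones(4), "layer2/grad": np.ones(2),
         "batch_norm/mean": np.zeros(3)},
    ]

    noisy_sums = {
        name: private_sum([ex[name] for ex in batch],
                          cfg["max_norm"], cfg["noise_multiplier"], rng)
        for name, cfg in config.items()
    }

In a full system, a privacy accountant such as the Moments Accountant would consume the subsampling rate and the per-vector noise multipliers across all training iterations to compute the overall (epsilon, delta) guarantee; the point of the sketch is only that each heterogeneous quantity can be clipped and noised with its own parameters.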


