A General Approach to Adding Differential Privacy to Iterative Training Procedures

12/15/2018
by H. Brendan McMahan, et al.

In this work we address the practical challenges of training machine learning models on privacy-sensitive datasets by introducing a modular approach that minimizes changes to training algorithms, provides a variety of configuration strategies for the privacy mechanism, and isolates and simplifies the critical logic that computes the final privacy guarantees. A key challenge is that training algorithms often need to estimate many different quantities (vectors) from the same set of examples: for example, gradients of different layers in a deep learning architecture, as well as metrics and batch-normalization parameters. Each of these may have different properties, such as dimensionality, magnitude, and tolerance to noise. By extending previous work on the Moments Accountant for the subsampled Gaussian mechanism, we can provide privacy for such heterogeneous sets of vectors, while also structuring the approach to minimize software-engineering challenges.


