Privacy Amplification by Iteration

08/20/2018
by Vitaly Feldman et al.

Many commonly used learning algorithms work by iteratively updating an intermediate solution using one or a few data points in each iteration. Analyses of differential privacy for such algorithms typically ensure the privacy of each step and then reason about the cumulative privacy cost of the algorithm. This approach relies on composition theorems for differential privacy, which permit releasing all of the intermediate results. In this work, we demonstrate that for contractive iterations, not releasing the intermediate results strongly amplifies the privacy guarantees. We describe several applications of this new analysis technique to solving convex optimization problems via noisy stochastic gradient descent. For example, we show that a relatively small number of non-private data points from the same distribution can be used to close the gap between private and non-private convex optimization. In addition, we show that guarantees similar to those obtainable via the privacy-amplification-by-sampling technique can be achieved in several natural settings where that technique cannot be applied.
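The iterative pattern described above can be illustrated with a minimal sketch of noisy projected stochastic gradient descent. This is not the paper's algorithm verbatim: the least-squares loss, parameter names, and noise scale below are illustrative assumptions. The key points it shows are that each update touches a single data point, the projection onto a bounded convex set keeps the update contractive, and only the final iterate is returned rather than the intermediate ones.

```python
import numpy as np

def noisy_sgd(data, lr=0.1, sigma=0.5, radius=1.0, seed=0):
    """One pass of noisy projected SGD on an illustrative least-squares loss.

    Each row of `data` is (features..., label). Gaussian noise is added to
    every gradient step, and only the final iterate is released; the
    intermediate iterates are kept internal.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(data.shape[1] - 1)
    for row in data:
        x, y = row[:-1], row[-1]
        grad = (w @ x - y) * x  # gradient of 0.5 * (w.x - y)^2 at this point
        w = w - lr * (grad + sigma * rng.standard_normal(w.shape))
        norm = np.linalg.norm(w)
        if norm > radius:       # projection onto an L2 ball of the given
            w = w * (radius / norm)  # radius; a contractive map
    return w                    # only the final iterate is released
```

A single run over a small dataset returns one noisy parameter vector inside the ball; nothing about the per-step trajectory is exposed to the caller, which is the setting in which the amplification-by-iteration analysis applies.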


Related research

- Differential Privacy Dynamics of Langevin Diffusion and Noisy Gradient Descent (02/11/2021)
- Stability Enhanced Privacy and Applications in Private Stochastic Gradient Descent (06/25/2020)
- Analytical Composition of Differential Privacy via the Edgeworth Accountant (06/09/2022)
- Convex Optimization for Linear Query Processing under Approximate Differential Privacy (02/13/2016)
- Differential Private Stream Processing of Energy Consumption (08/06/2018)
- Privacy Amplification via Iteration for Shuffled and Online PNSGD (06/20/2021)
- Shuffle Private Stochastic Convex Optimization (06/17/2021)
