Privacy Amplification via Iteration for Shuffled and Online PNSGD

06/20/2021
by Matteo Sordello et al.

In this paper, we consider the framework of privacy amplification via iteration, originally proposed by Feldman et al. and subsequently simplified by Asoodeh et al. in their analysis via the contraction coefficient. This line of work studies the privacy guarantees obtained by the projected noisy stochastic gradient descent (PNSGD) algorithm when the intermediate updates are kept hidden. A limitation of the existing literature is that only the early-stopped PNSGD has been studied, while no result has been proved for the more widely used PNSGD applied to a shuffled dataset. Moreover, no scheme has yet been proposed for decreasing the injected noise when new data are received in an online fashion. In this work, we first prove a privacy guarantee for shuffled PNSGD, which we analyze asymptotically: the noise is fixed for each sample size n but reduced at a predetermined rate as n increases, so that the privacy loss converges. We then analyze the online setting and provide a faster decaying scheme for the magnitude of the injected noise that also guarantees the convergence of the privacy loss.
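
For concreteness, below is a minimal sketch of the shuffled PNSGD template the abstract refers to, written in Python with NumPy. The helper names (grad_fn, project_fn), the squared-loss gradient, the projection radius R, and the step size and noise scale are illustrative assumptions, not the paper's exact construction; the point is only the structure of the iteration: shuffle once, take one noisy gradient step per sample, and project back onto a bounded convex domain, releasing only the final iterate.

    import numpy as np

    def shuffled_pnsgd(data, grad_fn, project_fn, eta, sigma, w0, rng=None):
        """One pass of projected noisy SGD over a shuffled dataset.

        Intermediate iterates stay hidden; only the final iterate is
        released, which is the setting the privacy analysis relies on.
        """
        rng = np.random.default_rng() if rng is None else rng
        w = np.asarray(w0, dtype=float)
        for i in rng.permutation(len(data)):          # shuffle the sample order
            g = grad_fn(w, data[i])                   # per-sample gradient
            z = sigma * rng.standard_normal(w.shape)  # Gaussian privacy noise
            w = project_fn(w - eta * (g + z))         # noisy step, then project
        return w

    # Illustrative usage: squared loss, projection onto the Euclidean
    # ball of radius R (a bounded convex domain, as PNSGD requires).
    R = 1.0

    def project(w):
        n = np.linalg.norm(w)
        return w if n <= R else (R / n) * w

    grad = lambda w, x: 2.0 * (w - x)                 # gradient of ||w - x||^2

    data = np.random.default_rng(0).normal(size=(100, 2))
    w_final = shuffled_pnsgd(data, grad, project, eta=0.1, sigma=0.5,
                             w0=np.zeros(2))

In the online setting described above, one would rerun this loop as the sample size n grows, shrinking sigma at the predetermined rate whose analysis is the subject of the paper.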

Related research

01/17/2020 · Privacy Amplification of Iterative Algorithms via Contraction Coefficients
We investigate the framework of privacy amplification by iteration, rece...

05/02/2023 · Revisiting Gradient Clipping: Stochastic bias and tight convergence guarantees
Gradient clipping is a popular modification to standard (stochastic) gra...

08/20/2018 · Privacy Amplification by Iteration
Many commonly used learning algorithms work by iteratively updating an i...

05/27/2022 · Privacy of Noisy Stochastic Gradient Descent: More Iterations without More Privacy Loss
A central issue in machine learning is how to train models on sensitive ...

04/11/2021 · Learning from Censored and Dependent Data: The case of Linear Dynamics
Observations from dynamical systems often exhibit irregularities, such a...

12/20/2020 · Privacy Analysis of Online Learning Algorithms via Contraction Coefficients
We propose an information-theoretic technique for analyzing privacy guar...

05/17/2023 · Privacy Loss of Noisy Stochastic Gradient Descent Might Converge Even for Non-Convex Losses
The Noisy-SGD algorithm is widely used for privately training machine le...
