Differentially Private Learning Needs Hidden State (Or Much Faster Convergence)

03/10/2022
by   Jiayuan Ye, et al.

Differential privacy analysis of randomized learning algorithms typically relies on composition theorems, which implicitly assume that the internal state of the iterative algorithm is revealed to the adversary. However, by assuming a hidden state for DP algorithms (where only the last iterate is observable), recent works prove a converging privacy bound for noisy gradient descent (on strongly convex, smooth loss functions) that is significantly smaller than composition bounds after O(1/step-size) epochs. In this paper, we extend this hidden-state analysis to noisy mini-batch stochastic gradient descent on strongly convex, smooth loss functions. We prove converging Rényi DP bounds under various mini-batch sampling schemes, such as "shuffle and partition" (used in practical implementations of DP-SGD) and "sampling without replacement". We prove that, in these settings, our privacy bound is much smaller than the composition bound when training for a large number of iterations (as is the case when learning from high-dimensional data). Our converging privacy analysis thus shows that differentially private learning with a tight bound requires either a hidden-state privacy analysis or fast convergence. To complement our theoretical results, we run experiments training classification models on the MNIST, FMNIST and CIFAR-10 datasets, and observe better accuracy under fixed privacy budgets with the hidden-state analysis.
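The hidden-state setting described above can be illustrated with a minimal sketch of noisy mini-batch SGD under "shuffle and partition" sampling, where only the final iterate is released. This is an illustrative reconstruction, not the paper's implementation: the function name, hyperparameters, and noise calibration below are assumptions chosen for clarity.

```python
import numpy as np

def noisy_sgd_shuffle_partition(data, grad_fn, dim, epochs=3, batch_size=10,
                                lr=0.1, noise_std=1.0, clip=1.0, seed=0):
    """Noisy mini-batch SGD with 'shuffle and partition' sampling.

    Hidden-state setting: intermediate iterates are never released;
    only the last iterate is returned to the observer.
    """
    rng = np.random.default_rng(seed)
    theta = np.zeros(dim)
    n = len(data)
    for _ in range(epochs):
        perm = rng.permutation(n)              # shuffle once per epoch
        for start in range(0, n, batch_size):  # partition into mini-batches
            batch = [data[i] for i in perm[start:start + batch_size]]
            # Clip per-example gradients to bound sensitivity.
            grads = []
            for x in batch:
                g = grad_fn(theta, x)
                norm = np.linalg.norm(g)
                grads.append(g * min(1.0, clip / norm) if norm > 0 else g)
            g_avg = np.mean(grads, axis=0)
            # Gaussian noise scaled to the clipping bound and batch size.
            noise = rng.normal(0.0, noise_std * clip / len(batch), size=dim)
            theta = theta - lr * (g_avg + noise)
    return theta  # last iterate only
```

For example, with the strongly convex quadratic loss (1/2)||theta - x||^2 (gradient `theta - x`), the last iterate concentrates near the data mean while the noisy trajectory itself stays hidden:

```python
rng = np.random.default_rng(1)
data = [np.array([1.0, 2.0]) + 0.1 * rng.normal(size=2) for _ in range(50)]
theta = noisy_sgd_shuffle_partition(data, lambda t, x: t - x, dim=2,
                                    epochs=5, noise_std=0.1)
```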


