Learning rate adaptation for differentially private stochastic gradient descent

09/11/2018
by Antti Koskela, et al.

Differentially private learning has recently emerged as the leading approach for privacy-preserving machine learning. Differential privacy can complicate learning procedures, because each access to the data must be carefully designed and carries a privacy cost; standard hyperparameter tuning with a validation set, for example, cannot be applied directly. In this paper, we propose a differentially private algorithm that adapts the learning rate of differentially private stochastic gradient descent (SGD) without the use of a validation set. The idea for the adaptation comes from the technique of extrapolation in classical numerical analysis: to estimate the error against the gradient flow underlying SGD, we compare the result of one full step with that of two half-steps. We prove the privacy of the method using the moments accountant, which allows us to compute tight privacy bounds. Empirically, we show that our method is competitive with manually tuned, commonly used optimisation methods for training deep neural networks and for differentially private variational inference.
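To illustrate the extrapolation idea in isolation, the sketch below compares one full SGD step with two half-steps on a toy quadratic loss and uses their discrepancy as a local error estimate to adjust the learning rate. This is not the paper's full algorithm: gradient clipping, noise addition, and the moments accountant are omitted, and all function names, constants, and the step-size control rule are illustrative.

```python
import numpy as np

def grad(theta):
    """Gradient of the toy quadratic loss f(theta) = 0.5 * ||theta||^2."""
    return theta

def adaptive_step(theta, lr, tol=0.01):
    # One full SGD step of size lr.
    full = theta - lr * grad(theta)
    # Two half-steps of size lr / 2.
    half = theta - 0.5 * lr * grad(theta)
    half = half - 0.5 * lr * grad(half)
    # The discrepancy between the two results estimates the local error
    # against the underlying gradient flow (cf. step-size control in
    # adaptive ODE solvers).
    err = np.linalg.norm(full - half) / max(np.linalg.norm(half), 1e-12)
    err = max(err, 1e-12)
    # Shrink the step when the error exceeds the tolerance, grow it
    # cautiously otherwise; the 0.9 safety factor and 1.1 growth cap
    # are illustrative choices.
    lr = lr * min(1.1, 0.9 * (tol / err) ** 0.5)
    return half, lr

theta, lr = np.ones(5), 0.5
for _ in range(20):
    theta, lr = adaptive_step(theta, lr)
```

In the differentially private setting, the two half-step gradients would additionally be clipped and perturbed with noise, and each gradient evaluation would be charged to the privacy budget.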


