Hyperparameter Tuning with Renyi Differential Privacy

10/07/2021
by Nicolas Papernot et al.

For many differentially private algorithms, such as the prominent noisy stochastic gradient descent (DP-SGD), the analysis needed to bound the privacy leakage of a single training run is well understood. However, few studies have reasoned about the privacy leakage resulting from the multiple training runs needed to fine-tune the values of the training algorithm's hyperparameters. In this work, we first illustrate how simply setting hyperparameters based on non-private training runs can leak private information. Motivated by this observation, we then provide privacy guarantees for hyperparameter search procedures within the framework of Renyi Differential Privacy. Our results improve and extend the work of Liu and Talwar (STOC 2019). Our analysis supports our previous observation that tuning hyperparameters does indeed leak private information, but we prove that, under certain assumptions, this leakage is modest, as long as each candidate training run needed to select hyperparameters is itself differentially private.
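The core idea behind such guarantees can be sketched as follows: instead of running the training algorithm once per candidate and releasing the best result, the search repeats a differentially private training run a randomly chosen number of times (e.g., geometrically distributed) and releases only the best candidate. The sketch below is illustrative only and is not the paper's algorithm or API; the function names, the scoring step, and the stopping probability are all assumptions made for the example.

```python
import random

def tune_privately(candidates, dp_train, dp_score, stop_prob=0.1, rng=None):
    """Illustrative random-stopping hyperparameter search.

    Repeatedly samples a hyperparameter setting, runs a training procedure
    that is assumed to be differentially private on its own, and keeps the
    best-scoring candidate. After each run, the loop stops with probability
    `stop_prob`, so the total number of runs is geometrically distributed.
    All names here are hypothetical, not the paper's API.
    """
    rng = rng or random.Random()
    best, best_score = None, float("-inf")
    while True:
        hp = rng.choice(candidates)   # sample one hyperparameter setting
        model = dp_train(hp)          # each training run is itself DP
        score = dp_score(model)       # scoring should also be privatized
        if score > best_score:
            best, best_score = hp, score
        if rng.random() < stop_prob:  # geometric stopping rule
            return best, best_score
```

The random stopping time is what keeps the overall privacy cost modest: releasing only the best of a randomized number of DP runs leaks much less than the naive composition over a fixed, adversarially known number of runs would suggest.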


Related research

10/05/2022
Fine-Tuning with Differential Privacy Necessitates an Additional Hyperparameter Search
Models need to be trained with privacy-preserving learning algorithms to...

01/27/2023
Practical Differentially Private Hyperparameter Tuning with Subsampling
Tuning all the hyperparameters of differentially private (DP) machine le...

08/09/2021
Efficient Hyperparameter Optimization for Differentially Private Deep Learning
Tuning the hyperparameters in the differentially private stochastic grad...

05/15/2023
Privacy Auditing with One (1) Training Run
We propose a scheme for auditing differentially private machine learning...

06/09/2023
DP-HyPO: An Adaptive Private Hyperparameter Optimization Framework
Hyperparameter optimization, also known as hyperparameter tuning, is a w...

12/23/2020
Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling
Recent work of Erlingsson, Feldman, Mironov, Raghunathan, Talwar, and Th...

05/09/2019
Differentially Private Learning with Adaptive Clipping
We introduce a new adaptive clipping technique for training learning mod...
