Privacy Auditing with One (1) Training Run

05/15/2023
by Thomas Steinke, et al.

We propose a scheme for auditing differentially private machine learning systems with a single training run. This exploits the parallelism of being able to add or remove multiple training examples independently. We analyze this using the connection between differential privacy and statistical generalization, which avoids the cost of group privacy. Our auditing scheme requires minimal assumptions about the algorithm and can be applied in the black-box or white-box setting.
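To make the idea concrete, below is a minimal sketch of the one-run auditing recipe described in the abstract, not the paper's exact procedure. The helpers `train_fn` and `score_fn` are hypothetical stand-ins for the training pipeline and a per-example membership score, and the binomial conversion to an epsilon lower bound is a loose simplification: it treats the guesses as independent, whereas the paper's analysis avoids that assumption via the connection between differential privacy and statistical generalization.

```python
import numpy as np
from scipy import stats

def audit_one_run(train_fn, score_fn, base_data, canaries, seed=0):
    """Single-run audit sketch: include each canary independently with
    probability 1/2, train ONCE, then guess each canary's membership
    from a per-example score and count correct guesses."""
    rng = np.random.default_rng(seed)
    m = len(canaries)
    included = rng.integers(0, 2, size=m).astype(bool)  # S_i ~ Bernoulli(1/2)
    dataset = list(base_data) + [c for c, s in zip(canaries, included) if s]
    model = train_fn(dataset)                           # the one training run
    scores = np.array([score_fn(model, c) for c in canaries])
    # Crude attack for illustration: guess "included" if a canary scores
    # above the median (higher score = more memorized); any stronger
    # black-box or white-box attack can be plugged in here.
    guesses = scores > np.median(scores)
    return int(np.sum(guesses == included)), m

def eps_lower_bound(correct, m, confidence=0.95):
    """Turn the guess count into an empirical epsilon lower bound by
    testing against the per-guess accuracy cap e^eps / (1 + e^eps)
    implied by pure eps-DP with Bernoulli(1/2) inclusion.
    NOTE: assumes independent guesses, which the paper does NOT need."""
    alpha = 1.0 - confidence
    best = 0.0
    for eps in np.linspace(0.0, 10.0, 1001):
        p = np.exp(eps) / (1.0 + np.exp(eps))          # max accuracy if eps-DP
        # Reject eps-DP at level alpha if `correct` successes out of m
        # would be too unlikely under accuracy p.
        if stats.binom.sf(correct - 1, m, p) < alpha:
            best = eps
    return best
```

The point of the design is the parallelism the abstract highlights: all m canaries are randomized and audited within a single training run, whereas the standard audit retrains the model once (or more) per add/remove trial.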


Related research

06/16/2020 · Model Explanations with Differential Privacy
Black-box machine learning models are used in critical decision-making d...

10/07/2021 · Hyperparameter Tuning with Renyi Differential Privacy
For many differentially private algorithms, such as the prominent noisy ...

04/14/2023 · Separating Key Agreement and Computational Differential Privacy
Two party differential privacy allows two parties who do not trust each ...

08/22/2020 · On the Intrinsic Differential Privacy of Bagging
Differentially private machine learning trains models while protecting p...

03/16/2018 · Differential Privacy for Growing Databases
We study the design of differentially private algorithms for adaptive an...

12/09/2022 · Lower Bounds for Rényi Differential Privacy in a Black-Box Setting
We present new methods for assessing the privacy guarantees of an algori...

08/20/2018 · An Economic Analysis of Privacy Protection and Statistical Accuracy as Social Choices
Statistical agencies face a dual mandate to publish accurate statistics ...
