Quantifying identifiability to choose and audit ε in differentially private deep learning

03/04/2021
by Daniel Bernau et al.

Differential privacy allows bounding the influence that training data records have on a machine learning model. To use differential privacy in machine learning, data scientists must choose privacy parameters (ϵ,δ). Choosing meaningful privacy parameters is key: models trained with weak privacy parameters may leak excessive information about their training data, while strong privacy parameters may overly degrade model utility. However, privacy parameter values are difficult to choose for two main reasons. First, the upper bound on privacy loss (ϵ,δ) might be loose, depending on the chosen sensitivity and the data distribution of practical datasets. Second, legal requirements and societal norms for anonymization often refer to individual identifiability, to which (ϵ,δ) are only indirectly related. We transform (ϵ,δ) into a bound on the Bayesian posterior belief, held by the adversary assumed in differential privacy, about the presence of any record in the training dataset. The bound holds for multidimensional queries under composition, and we show that it can be tight in practice. Furthermore, we derive an identifiability bound that relates the adversary assumed in differential privacy to previous work on membership inference adversaries. We formulate an implementation of this differential privacy adversary that allows data scientists to audit model training and compute empirical identifiability scores and empirical (ϵ,δ).
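As context for the transformation the abstract describes, the sketch below is illustrative rather than the paper's actual derivation. It uses two standard facts: for pure ε-DP (δ = 0) and a uniform prior of 1/2, the adversary's posterior belief that a record is present is at most e^ε / (1 + e^ε); and the hypothesis-testing characterization of (ε,δ)-DP, TPR ≤ e^ε · FPR + δ, gives an empirical lower bound on ε from an attack's true and false positive rates. All function and parameter names are hypothetical and do not come from the paper's implementation.

import math

def posterior_belief_bound(epsilon, prior=0.5):
    # Upper bound on the adversary's posterior belief that a specific record
    # is in the training data, assuming pure epsilon-DP (delta = 0).
    # With a uniform prior this reduces to exp(eps) / (1 + exp(eps)).
    odds = (prior / (1.0 - prior)) * math.exp(epsilon)
    return odds / (1.0 + odds)

def empirical_epsilon_lower_bound(tpr, fpr, delta=0.0):
    # Empirical lower bound on epsilon implied by the hypothesis-testing view
    # of (eps, delta)-DP: TPR <= exp(eps) * FPR + delta, hence
    # eps >= ln((TPR - delta) / FPR) whenever TPR > delta and FPR > 0.
    if tpr <= delta or fpr <= 0.0:
        return 0.0  # attack too weak (or no false positives) to certify a positive bound
    return max(0.0, math.log((tpr - delta) / fpr))

if __name__ == "__main__":
    for eps in (0.1, 1.0, 3.0):
        print(f"eps = {eps}: posterior belief <= {posterior_belief_bound(eps):.3f}")
    # Hypothetical membership inference attack rates from an auditing run:
    print("empirical eps >=", round(empirical_epsilon_lower_bound(tpr=0.62, fpr=0.25, delta=1e-5), 3))

For example, ε = 1 with a uniform prior yields a posterior belief of at most about 0.73, a quantity that is often easier to relate to identifiability requirements than ε itself.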


