Epsilon*: Privacy Metric for Machine Learning Models

07/21/2023
by Diana M. Negoescu, et al.

We introduce Epsilon*, a new privacy metric for measuring the privacy risk of a single model instance prior to, during, or after deployment of privacy mitigation strategies. The metric does not require access to the training data sampling or the model training algorithm. Epsilon* is a function of the true positive and false positive rates in a hypothesis test used by an adversary in a membership inference attack. We distinguish between quantifying the privacy loss of a trained model instance and quantifying the privacy loss of the training mechanism that produces this model instance. Existing approaches in the privacy auditing literature provide lower bounds for the latter, while our metric provides a lower bound for the former by relying on an (ϵ, δ)-type quantification of the privacy of the trained model instance. We establish a relationship between these lower bounds and show how to implement Epsilon* to avoid numerical instability and noise amplification. We further show in experiments on benchmark public data sets that Epsilon* is sensitive to privacy risk mitigation by training with differential privacy (DP), where the value of Epsilon* is reduced by up to 800% compared to the Epsilon* values of non-DP trained baseline models. This metric allows privacy auditors to be independent of model owners, and enables all decision-makers to visualize the privacy-utility landscape to make informed decisions regarding the trade-offs between model privacy and utility.
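The abstract's idea of bounding privacy loss from an attacker's true positive and false positive rates can be illustrated with the standard (ϵ, δ) hypothesis-testing trade-off (Kairouz et al., 2015), from which any observed membership-inference operating point certifies a lower bound on ϵ. The sketch below is a minimal Python illustration of that relation, not the paper's exact estimator; the names epsilon_lower_bound and epsilon_star, the aggregation over ROC points, and the choice of δ are assumptions for illustration.

```python
import math

def epsilon_lower_bound(tpr: float, fpr: float, delta: float = 1e-5) -> float:
    """Lower bound on epsilon implied by one membership-inference
    operating point (tpr, fpr) at a fixed delta.

    Any test against an (epsilon, delta)-DP mechanism satisfies
    (Kairouz et al., 2015):
        fpr + exp(eps) * fnr >= 1 - delta
        fnr + exp(eps) * fpr >= 1 - delta
    so an observed (tpr, fpr) certifies
        eps >= max(log((1 - delta - fpr) / fnr),
                   log((1 - delta - fnr) / fpr)).
    """
    fnr = 1.0 - tpr
    eps = 0.0
    for num, den in ((1.0 - delta - fpr, fnr), (1.0 - delta - fnr, fpr)):
        # Only take the log when it is finite and positive; this guard
        # also avoids blow-ups from noisy rate estimates near 0.
        if den > 0.0 and num > den:
            eps = max(eps, math.log(num / den))
    return eps

def epsilon_star(roc_points, delta: float = 1e-5) -> float:
    """Tightest bound over all (tpr, fpr) points on the attack's ROC
    curve (hypothetical aggregation; the paper additionally treats the
    numerical stability of this step)."""
    return max(epsilon_lower_bound(t, f, delta) for t, f in roc_points)

# Example: an attack reaching TPR = 0.7 at FPR = 0.1 certifies
# eps >= log((1 - 1e-5 - 0.3) / 0.1) ~= 1.95.
print(epsilon_lower_bound(0.7, 0.1))
```

Under this reading, a stronger attack (higher TPR at the same FPR) yields a larger certified ϵ, which is why DP training, by weakening the best achievable attack, drives the metric down.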


Related research

11/21/2019 · Effects of Differential Privacy and Data Skewness on Membership Inference Vulnerability
Membership inference attacks seek to infer the membership of individual ...

03/29/2023 · Non-Asymptotic Lower Bounds For Training Data Reconstruction
We investigate semantic guarantees of private learning algorithms for th...

12/25/2017 · Towards Measuring Membership Privacy
Machine learning models are increasingly made available to the masses th...

06/27/2023 · Probing the Transition to Dataset-Level Privacy in ML Models Using an Output-Specific and Data-Resolved Privacy Profile
Differential privacy (DP) is the prevailing technique for protecting use...

02/24/2022 · Bounding Membership Inference
Differential Privacy (DP) is the de facto standard for reasoning about t...

07/14/2023 · Trading Off Voting Axioms for Privacy
In this paper, we investigate tradeoffs among differential privacy (DP) ...

10/07/2012 · Privacy Aware Learning
We study statistical risk minimization problems under a privacy model in...
