Interpreting Epsilon of Differential Privacy in Terms of Advantage in Guessing or Approximating Sensitive Attributes

11/28/2019
by Peeter Laud, et al.

There are numerous methods for achieving ϵ-differential privacy (DP). The open question is what value of ϵ is appropriate: there is no common agreement on a "sufficiently small" ϵ, and its goodness depends on both the query and the data. In this paper, we show how to compute the ϵ that corresponds to δ, defined as the adversary's advantage in the probability of guessing some specific property of the output. The attacker's goal can be stated as a Boolean expression over guessing particular attributes, possibly within some precision; the attributes combined in this way should be independent. We assume that both the input and the output distributions have corresponding probability density functions or probability mass functions.
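To build intuition for this δ-to-ε correspondence, consider the simplest special case: a single binary sensitive attribute with a uniform prior. Under pure ϵ-DP, a standard bound says the adversary's probability of guessing such an attribute correctly is at most e^ϵ/(1 + e^ϵ), so the advantage over the 1/2 baseline is δ = e^ϵ/(1 + e^ϵ) − 1/2, and inverting gives ϵ = ln((1/2 + δ)/(1/2 − δ)). The sketch below implements only this textbook special case, not the paper's general construction for Boolean combinations of attributes:

```python
import math

def epsilon_from_advantage(delta: float) -> float:
    """Map a desired bound delta on the adversary's guessing advantage
    (over the 1/2 baseline, for one binary attribute with uniform prior)
    to the epsilon of pure DP, via eps = ln((1/2 + delta) / (1/2 - delta))."""
    if not 0.0 <= delta < 0.5:
        raise ValueError("advantage must lie in [0, 0.5)")
    p = 0.5 + delta  # maximal allowed guessing probability
    return math.log(p / (1.0 - p))

def advantage_from_epsilon(eps: float) -> float:
    """Inverse direction: the guessing advantage implied by a given epsilon,
    using the bound Pr[correct guess] <= e^eps / (1 + e^eps)."""
    return math.exp(eps) / (1.0 + math.exp(eps)) - 0.5
```

For example, capping the adversary's advantage at δ = 0.1 (a 60% guessing probability) yields ϵ = ln(1.5) ≈ 0.405, illustrating that commonly used values such as ϵ = 1 already permit a substantial advantage in this simple setting.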


Related research:

- How Do Input Attributes Impact the Privacy Loss in Differential Privacy? (11/18/2022)
- Capacity Bounded Differential Privacy (07/03/2019)
- In Differential Privacy, There is Truth: On Vote Leakage in Ensemble Private Learning (09/22/2022)
- Privacy Leakage over Dependent Attributes in One-Sided Differential Privacy (12/17/2021)
- Differential Privacy for Functions and Functional Data (03/12/2012)
- Improved Generalization Guarantees in Restricted Data Models (07/20/2022)
