Assessing differentially private deep learning with Membership Inference

12/24/2019
by Daniel Bernau, et al.

Releasing data in the form of a trained neural network with differential privacy promises meaningful anonymization. However, differential privacy carries an inherent privacy-accuracy trade-off that is challenging for non-privacy experts to assess. Furthermore, local and central differential privacy mechanisms are available to anonymize either the training data or the learned neural network, and the privacy parameter ϵ cannot be used to compare these two mechanisms. We propose to measure privacy through a black-box membership inference attack and to compare the privacy-accuracy trade-off of different local and central differential privacy mechanisms. This also lets us evaluate whether differential privacy is a useful mechanism in practice, since data scientists will adopt it especially when it lowers membership inference risk by more than it lowers accuracy. Experiments on several datasets show that neither local nor central differential privacy yields a consistently better privacy-accuracy trade-off in all cases. We also show that the relative privacy-accuracy trade-off, instead of declining strictly linearly over ϵ, is only favorable within a small interval. To express this relative privacy-accuracy trade-off we propose the ratio φ.
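As an illustration, a relative privacy-accuracy trade-off ratio in the spirit of the paper's φ could compare the reduction in membership inference advantage against the reduction in model accuracy. The abstract does not give the authors' exact definition of φ, so the formula, function names, and example numbers below are assumptions for illustration only:

```python
# Hedged sketch of a relative privacy-accuracy trade-off ratio.
# NOTE: this is NOT the paper's definition of phi, which is not stated
# in the abstract; it is one plausible, illustrative formulation.

def mi_advantage(attack_accuracy: float) -> float:
    """Membership inference advantage over random guessing (50%)."""
    return max(0.0, 2.0 * attack_accuracy - 1.0)

def phi(baseline_acc: float, private_acc: float,
        baseline_attack_acc: float, private_attack_acc: float) -> float:
    """Relative reduction in membership inference advantage divided by
    the relative loss in model accuracy. Under this (assumed) reading,
    values > 1 mean privacy is gained faster than accuracy is lost."""
    adv_drop = mi_advantage(baseline_attack_acc) - mi_advantage(private_attack_acc)
    rel_priv_gain = adv_drop / mi_advantage(baseline_attack_acc)
    rel_acc_loss = (baseline_acc - private_acc) / baseline_acc
    return rel_priv_gain / rel_acc_loss

# Example (hypothetical numbers): training with a DP mechanism drops test
# accuracy from 0.90 to 0.85 while the black-box membership inference
# attack's accuracy falls from 0.70 to 0.55.
print(phi(0.90, 0.85, 0.70, 0.55))
```

Under these hypothetical numbers the attack advantage shrinks by 75% while accuracy drops only about 5.6%, so the ratio is well above 1, i.e. a favorable trade-off in this formulation.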

