Assessing Differentially Private Deep Learning with Membership Inference

by Daniel Bernau et al.

Releasing data in the form of neural networks trained with differential privacy promises meaningful anonymization. However, differential privacy carries an inherent privacy-accuracy trade-off that is difficult for non-experts to assess. Furthermore, both local and central differential privacy mechanisms are available, anonymizing either the training data or the learned neural network, and the privacy parameter ϵ cannot be used to compare the two. We propose to measure privacy through a black-box membership inference attack and to compare the privacy-accuracy trade-off across local and central differential privacy mechanisms. We also evaluate whether differential privacy is useful in practice, since data scientists will adopt it chiefly when membership inference risk is reduced more than model accuracy. Experiments on several datasets show that neither local nor central differential privacy yields a consistently better privacy-accuracy trade-off in all cases. We further show that the relative privacy-accuracy trade-off, rather than declining linearly over ϵ, is favorable only within a small interval. To quantify this, we propose φ, a ratio expressing the relative privacy-accuracy trade-off.
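To make the black-box setting concrete, the sketch below shows a generic confidence-thresholding membership inference attack. This is an illustration of the general attack class, not the authors' exact attack; the threshold and all confidence values are synthetic assumptions. The idea is that overfit models tend to assign higher confidence to training members than to unseen records, so an attacker who only observes model outputs can guess membership by thresholding confidence. A common success metric is the membership advantage, TPR minus FPR.

```python
# Generic sketch of a black-box membership inference attack via
# confidence thresholding. NOT the paper's exact attack; all data
# below is synthetic for illustration.

def membership_advantage(member_conf, nonmember_conf, threshold):
    """Predict 'member' when the model's confidence on a record
    exceeds `threshold`; return advantage = TPR - FPR."""
    tpr = sum(c > threshold for c in member_conf) / len(member_conf)
    fpr = sum(c > threshold for c in nonmember_conf) / len(nonmember_conf)
    return tpr - fpr

# An overfit model is typically more confident on its training records.
members = [0.99, 0.97, 0.95, 0.90, 0.85]     # confidences on training data
nonmembers = [0.80, 0.70, 0.65, 0.60, 0.55]  # confidences on unseen data

adv = membership_advantage(members, nonmembers, threshold=0.82)
print(adv)  # 1.0 on this toy data: the attack separates the groups perfectly
```

Differentially private training (local or central) is expected to shrink the confidence gap between members and non-members, pushing this advantage toward zero at some cost in accuracy.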




