Assessing differentially private deep learning with Membership Inference

12/24/2019
by Daniel Bernau, et al.

Releasing data in the form of a trained neural network with differential privacy promises meaningful anonymization. However, differential privacy comes with an inherent privacy-accuracy trade-off that is challenging to assess for non-privacy experts. Furthermore, local and central differential privacy mechanisms are available to anonymize either the training data or the learnt neural network, and the privacy parameter ϵ cannot be used to compare the two. We propose to measure privacy through a black-box membership inference attack and compare the privacy-accuracy trade-off for different local and central differential privacy mechanisms. We also evaluate whether differential privacy is useful in practice, since data scientists will adopt it primarily if membership inference risk is lowered more than accuracy. Experimenting with several datasets, we show that neither local nor central differential privacy yields a consistently better privacy-accuracy trade-off in all cases. We also show that the relative privacy-accuracy trade-off, instead of declining strictly linearly over ϵ, is favorable only within a small interval of ϵ. To express this relative trade-off, we propose the ratio φ.
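
The privacy measurement referenced above is a black-box membership inference attack. The following is a minimal sketch in the shadow-model style of Shokri et al.; the single shadow model, sklearn classifiers, and synthetic data are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal black-box membership inference sketch (shadow-model style).
# Assumptions: one shadow model, synthetic data, sklearn classifiers.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)

# Disjoint data for the target model and for the attacker's shadow model.
X_tgt, X_sh, y_tgt, y_sh = train_test_split(X, y, test_size=0.5, random_state=0)
Xt_in, Xt_out, yt_in, yt_out = train_test_split(X_tgt, y_tgt, test_size=0.5, random_state=1)
Xs_in, Xs_out, ys_in, ys_out = train_test_split(X_sh, y_sh, test_size=0.5, random_state=2)

# Target model (under attack) and a shadow model imitating its behavior.
target = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0).fit(Xt_in, yt_in)
shadow = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0).fit(Xs_in, ys_in)

def attack_features(model, X):
    # Black-box access: only the output confidence vector is observed.
    probs = model.predict_proba(X)
    return np.sort(probs, axis=1)[:, ::-1]  # confidences, descending

# Train the attack model on the shadow model's members vs. non-members.
A = np.vstack([attack_features(shadow, Xs_in), attack_features(shadow, Xs_out)])
m = np.concatenate([np.ones(len(Xs_in)), np.zeros(len(Xs_out))])
attacker = LogisticRegression(max_iter=1000).fit(A, m)

# Evaluate membership inference accuracy against the target model.
T = np.vstack([attack_features(target, Xt_in), attack_features(target, Xt_out)])
t = np.concatenate([np.ones(len(Xt_in)), np.zeros(len(Xt_out))])
print("membership inference accuracy:", attacker.score(T, t))
```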
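The abstract introduces φ as a ratio expressing the relative privacy-accuracy trade-off. As a hedged illustration only, the sketch below assumes φ compares the relative drop in membership inference accuracy to the relative drop in test accuracy when a differential privacy mechanism is applied; the authors' exact definition may differ, and the numbers in the example are hypothetical.

```python
# Illustrative sketch of a relative privacy-accuracy trade-off ratio phi.
# Assumption: phi = (relative drop in MI accuracy) / (relative drop in
# test accuracy). Consult the paper for the authors' exact formulation.

def phi(mi_acc_baseline: float, mi_acc_dp: float,
        test_acc_baseline: float, test_acc_dp: float) -> float:
    """Ratio > 1 means MI risk drops faster than model accuracy."""
    privacy_gain = (mi_acc_baseline - mi_acc_dp) / mi_acc_baseline
    accuracy_loss = (test_acc_baseline - test_acc_dp) / test_acc_baseline
    return privacy_gain / accuracy_loss if accuracy_loss > 0 else float("inf")

# Hypothetical example: DP cuts MI accuracy from 0.70 to 0.55 while
# test accuracy falls from 0.90 to 0.84.
print(phi(0.70, 0.55, 0.90, 0.84))  # ~3.2: a favorable trade-off
```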

