
Does Label Differential Privacy Prevent Label Inference Attacks?

by Ruihan Wu, et al.
Cornell University

Label differential privacy (LDP) is a popular framework for training private ML models on datasets with public features and sensitive private labels. Despite its rigorous privacy guarantee, it has been observed that in practice LDP does not preclude label inference attacks (LIAs): models trained with LDP can be evaluated on the public training features to recover, with high accuracy, the very private labels that they were designed to protect. In this work, we argue that this phenomenon is not paradoxical: LDP merely limits the advantage of an LIA adversary over predicting the training labels with the Bayes classifier. At ϵ=0 this advantage is zero; the optimal attack is then to predict according to the Bayes classifier, which is independent of the training labels. Finally, we empirically demonstrate that our result closely captures the behavior of simulated attacks on both synthetic and real-world datasets.
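The claim above can be illustrated with a toy simulation. The sketch below (my own illustration, not the paper's experimental setup) privatizes binary labels with randomized response, a standard ϵ-label-DP mechanism, and compares two attacks: reading the privatized labels directly, and the label-independent Bayes-classifier baseline. The feature model `P(y=1|x) = x` is an assumed toy distribution chosen for simplicity.

```python
import math
import random

def randomized_response(label, epsilon):
    """Standard eps-label-DP mechanism for binary labels:
    keep the true label with probability e^eps / (1 + e^eps), else flip it."""
    p_keep = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return label if random.random() < p_keep else 1 - label

def bayes_classifier(x):
    """Bayes-optimal prediction for the toy model P(y=1|x) = x, x in [0,1]."""
    return 1 if x >= 0.5 else 0

random.seed(0)
n = 100_000
xs = [random.random() for _ in range(n)]
labels = [1 if random.random() < x else 0 for x in xs]

# The Bayes baseline ignores the (private) labels entirely.
acc_bayes = sum(bayes_classifier(x) == y for x, y in zip(xs, labels)) / n

for eps in (0.0, 1.0, 4.0):
    noisy = [randomized_response(y, eps) for y in labels]
    # An attacker who reads the privatized labels directly.
    acc_noisy = sum(ny == y for ny, y in zip(noisy, labels)) / n
    print(f"eps={eps}: privatized-label attack={acc_noisy:.3f}, "
          f"Bayes baseline={acc_bayes:.3f}")
```

At ϵ=0 the randomized responses are independent coin flips, so the privatized-label attack does no better than chance (≈0.5), while the label-independent Bayes baseline still recovers labels well above chance. This mirrors the paper's point: LDP bounds only the adversary's *advantage* over the Bayes classifier, not the absolute accuracy of label inference.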

