Generalization Techniques Empirically Outperform Differential Privacy against Membership Inference

10/11/2021
by Jiaxiang Liu, et al.

Differentially private training algorithms provide protection against one of the most popular attacks in machine learning: the membership inference attack. However, these privacy algorithms incur a loss of the model's classification accuracy, thereby creating a privacy-utility trade-off. The amount of noise that differential privacy requires to provide strong theoretical protection guarantees in deep learning typically renders the models unusable, but researchers have observed that even lower noise levels provide acceptable empirical protection against existing membership inference attacks. In this work, we look for alternatives to differential privacy for empirically protecting against membership inference attacks. We study the protection that simply following good machine learning practices (not designed with privacy in mind) offers against membership inference. We evaluate the performance of state-of-the-art techniques, such as pre-training and sharpness-aware minimization, alone and in combination with differentially private training algorithms, and find that, when using early stopping, the algorithms without differential privacy can provide both higher utility and higher privacy than their differentially private counterparts. These findings challenge the belief that differential privacy is a good defense against existing membership inference attacks.
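As a rough illustration of what such attacks exploit, the sketch below (not taken from the paper) implements a simple loss-threshold membership inference test in Python with NumPy: the attacker guesses that an example was a training member when the model's loss on it falls below a threshold. The loss arrays, the exponential toy distributions, and the mean-loss threshold are all illustrative assumptions, not the paper's evaluation setup; the point is only that the attack's advantage shrinks as the member and non-member loss distributions converge, which is exactly what well-generalized models (via pre-training, sharpness-aware minimization, or early stopping) tend to achieve.

# Minimal sketch of a loss-threshold membership inference attack
# (in the spirit of Yeom et al.). Assumes per-example losses for
# training members and non-members are already available; all names
# and distributions below are illustrative.
import numpy as np

def membership_advantage(member_losses, nonmember_losses, threshold):
    """Predict 'member' when the loss is below the threshold and return
    the attacker's advantage (true positive rate minus false positive rate)."""
    tpr = np.mean(member_losses < threshold)     # members correctly flagged
    fpr = np.mean(nonmember_losses < threshold)  # non-members wrongly flagged
    return tpr - fpr

# Toy data: an overfit model drives training losses well below held-out losses,
# separating the two distributions and leaking membership information.
rng = np.random.default_rng(0)
member_losses = rng.exponential(scale=0.3, size=10_000)     # losses on training data
nonmember_losses = rng.exponential(scale=0.5, size=10_000)  # losses on held-out data

# Use the mean training loss as a simple attack threshold.
threshold = member_losses.mean()
print(f"attack advantage: {membership_advantage(member_losses, nonmember_losses, threshold):.3f}")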

Related research

06/13/2020
Auditing Differentially Private Machine Learning: How Private is Private SGD?
We investigate whether Differentially Private SGD offers better privacy ...

03/16/2021
The Influence of Dropout on Membership Inference in Differentially Private Models
Differentially private models seek to protect the privacy of data the mo...

03/13/2022
One Parameter Defense – Defending against Data Inference Attacks via Differential Privacy
Machine learning models are vulnerable to data inference attacks, such a...

09/07/2022
On the utility and protection of optimization with differential privacy and classic regularization techniques
Nowadays, owners and developers of deep learning models must consider st...

09/13/2022
Differentially Private Genomic Data Release For GWAS Reproducibility
With the rapid development of technology in genome-related fields, resea...

02/14/2022
Optimizing Random Mixup with Gaussian Differential Privacy
Differentially private data release receives rising attention in machine...

05/28/2023
Training Private Models That Know What They Don't Know
Training reliable deep learning models which avoid making overconfident ...
