On the Effectiveness of Regularization Against Membership Inference Attacks

06/09/2020
by Yiğitcan Kaya, et al.

Deep learning models often raise privacy concerns as they leak information about their training data. This enables an adversary to determine whether a data point was in a model's training set by conducting a membership inference attack (MIA). Prior work has conjectured that regularization techniques, which combat overfitting, may also mitigate the leakage. While many regularization mechanisms exist, their effectiveness against MIAs has not been studied systematically, and the resulting privacy properties are not well understood. We explore the lower bound for information leakage that practical attacks can achieve. First, we evaluate the effectiveness of 8 mechanisms in mitigating two recent MIAs, on three standard image classification tasks. We find that certain mechanisms, such as label smoothing, may inadvertently help MIAs. Second, we investigate the potential of improving the resilience to MIAs by combining complementary mechanisms. Finally, we quantify the opportunity of future MIAs to compromise privacy by designing a white-box "distance-to-confident" (DtC) metric, based on adversarial sample crafting. Our metric reveals that, even when existing MIAs fail, the training samples may remain distinguishable from test samples. This suggests that regularization mechanisms can provide a false sense of privacy, even when they appear effective against existing MIAs.
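To make the threat model concrete, the sketch below illustrates the kind of attack the abstract describes: a confidence-thresholding membership inference attack, which guesses that a sample is a training member whenever the model's confidence on it exceeds a threshold. This is not the paper's implementation; the function names, the threshold sweep, and the toy confidence data are illustrative assumptions.

    # Minimal sketch of a confidence-thresholding MIA (illustrative, not the
    # paper's code). An overfit model tends to be more confident on training
    # members than on unseen samples, so thresholding confidence already
    # separates the two groups better than chance.
    import numpy as np

    def balanced_attack_accuracy(member_conf, nonmember_conf, threshold):
        tpr = np.mean(member_conf >= threshold)    # members correctly flagged
        tnr = np.mean(nonmember_conf < threshold)  # non-members correctly rejected
        return (tpr + tnr) / 2.0

    def confidence_threshold_mia(member_conf, nonmember_conf):
        # Oracle threshold sweep over the attacked data itself; a real attacker
        # would calibrate the threshold on shadow-model data instead.
        candidates = np.unique(np.concatenate([member_conf, nonmember_conf]))
        best = max(candidates,
                   key=lambda t: balanced_attack_accuracy(member_conf, nonmember_conf, t))
        return best, balanced_attack_accuracy(member_conf, nonmember_conf, best)

    # Toy usage with synthetic confidences (assumed numbers, not from the paper).
    rng = np.random.default_rng(0)
    member_conf = np.clip(rng.normal(0.95, 0.05, size=1000), 0.0, 1.0)
    nonmember_conf = np.clip(rng.normal(0.80, 0.15, size=1000), 0.0, 1.0)
    thr, acc = confidence_threshold_mia(member_conf, nonmember_conf)
    print(f"threshold={thr:.3f}, balanced attack accuracy={acc:.3f}")

A balanced attack accuracy near 0.5 means the attack cannot tell members from non-members. Regularization that narrows the confidence gap between training and test samples pushes such attacks toward chance, which is the effect the paper measures; the DtC metric then asks whether the samples remain distinguishable even when this gap appears closed.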
