Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction

07/04/2023
by Zitao Chen, et al.

Machine learning (ML) models are vulnerable to membership inference attacks (MIAs), which determine whether a given input was used to train the target model. Despite many efforts to mitigate MIAs, existing defenses often provide limited privacy protection, incur a large accuracy drop, and/or require additional data that may be difficult to acquire. This work proposes a defense technique, HAMP, that achieves both strong membership privacy and high accuracy without requiring extra data. To mitigate MIAs in their different forms, we observe that they can be unified: all of them exploit, through different proxies, the ML model's overconfidence in predicting training samples. This motivates our design to enforce less confident predictions, forcing the model to behave similarly on training and testing samples. HAMP consists of a novel training framework with high-entropy soft labels and an entropy-based regularizer that constrains the model's predictions while still achieving high accuracy. To further reduce privacy risk, HAMP uniformly modifies all prediction outputs into low-confidence outputs while preserving accuracy, which effectively obscures the differences between predictions on members and non-members. We conduct an extensive evaluation on five benchmark datasets and show that HAMP consistently provides high accuracy and strong membership privacy. A comparison with seven state-of-the-art defenses shows that HAMP achieves a superior privacy-utility trade-off.
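To make the training-side idea concrete, below is a minimal PyTorch sketch of the two ingredients the abstract names: high-entropy soft labels and an entropy-based regularizer. The function names and the hyperparameters alpha (how much mass stays on the true class) and lam (regularizer weight) are illustrative assumptions, not the paper's actual choices.

import torch
import torch.nn.functional as F

def high_entropy_soft_labels(targets, num_classes, alpha=0.7):
    """Mix one-hot labels with the uniform distribution so the
    target itself carries high entropy: `alpha` mass on the true
    class, the remaining (1 - alpha) spread over all classes."""
    one_hot = F.one_hot(targets, num_classes).float()
    uniform = torch.full_like(one_hot, 1.0 / num_classes)
    return alpha * one_hot + (1.0 - alpha) * uniform

def hamp_style_loss(logits, targets, num_classes, alpha=0.7, lam=1.0):
    """Cross-entropy against the high-entropy soft labels, plus a
    penalty term that rewards high prediction entropy, i.e. pushes
    the model toward less confident outputs."""
    log_probs = F.log_softmax(logits, dim=1)
    soft = high_entropy_soft_labels(targets, num_classes, alpha)
    ce = -(soft * log_probs).sum(dim=1).mean()
    probs = log_probs.exp()
    entropy = -(probs * log_probs).sum(dim=1).mean()
    # Subtracting the entropy term means minimizing the loss
    # maximizes prediction entropy alongside fitting the soft labels.
    return ce - lam * entropy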

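The test-time step can be sketched the same way. The abstract only states that HAMP uniformly turns every prediction into a low-confidence output while preserving accuracy; one simple way to realize that property, not necessarily the paper's exact mechanism, is to emit the same near-uniform template for every input, keeping only the predicted class on top so top-1 accuracy is unchanged.

import torch

def flatten_output(probs, top_mass=0.2):
    """Replace each prediction with an identical low-confidence
    template: `top_mass` on the argmax class, the remainder spread
    uniformly over the other classes. Requires top_mass > 1/k so the
    argmax (and hence top-1 accuracy) is preserved. Members and
    non-members now produce outputs of the same shape."""
    n, k = probs.shape
    pred = probs.argmax(dim=1)
    out = torch.full((n, k), (1.0 - top_mass) / (k - 1))
    out[torch.arange(n), pred] = top_mass
    return out

Because every returned distribution is identical up to which class holds the top slot, score-based attack proxies such as confidence or entropy no longer separate members from non-members.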
Related research:

10/15/2021 · Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture
Membership inference attacks are a key measure to evaluate privacy leaka...

03/02/2022 · MIAShield: Defending Membership Inference Attacks via Preemptive Exclusion of Members
In membership inference attacks (MIAs), an adversary observes the predic...

02/07/2022 · Membership Inference Attacks and Defenses in Neural Network Pruning
Neural network pruning has been an essential technique to reduce the com...

11/02/2021 · Knowledge Cross-Distillation for Membership Privacy
A membership inference attack (MIA) poses privacy risks on the training ...

07/28/2020 · Label-Only Membership Inference Attacks
Membership inference attacks are one of the simplest forms of privacy le...

11/18/2019 · Privacy Leakage Avoidance with Switching Ensembles
We consider membership inference attacks, one of the main privacy issues...

09/27/2019 · Membership Encoding for Deep Learning
Machine learning as a service (MLaaS), and algorithm marketplaces are on...
