Use the Spear as a Shield: A Novel Adversarial Example based Privacy-Preserving Technique against Membership Inference Attacks

11/27/2020
by   Mingfu Xue, et al.

Recently, membership inference attacks have posed a serious threat to the privacy of the confidential training data of machine learning models. This paper proposes a novel adversarial example based privacy-preserving technique (AEPPT), which adds crafted adversarial perturbations to the prediction of the target model to mislead the adversary's membership inference model. The added adversarial perturbations do not affect the accuracy of the target model, but can prevent the adversary from inferring whether a specific data sample is in the training set of the target model. Since AEPPT only modifies the original output of the target model, the proposed method is general and does not require modifying or retraining the target model. Experimental results show that the proposed method can reduce the inference accuracy and precision of the membership inference model to about 50%, which is close to random guessing. For those adaptive attacks where the adversary knows the defense mechanism, the proposed AEPPT is also demonstrated to be effective. Compared with state-of-the-art defense methods, the proposed defense can significantly degrade the accuracy and precision of membership inference attacks to 50% (i.e., the same as a random guess) while the performance and utility of the target model are not affected.
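The core idea, perturbing the model's output confidence vector so that it no longer leaks membership signals while the predicted label (and hence the model's accuracy) stays unchanged, can be sketched as follows. This is only an illustrative sketch: AEPPT crafts the perturbation adversarially against the attacker's inference model, whereas here the noise is random, and the noise scale, clipping floor, and label-restoring swap are all assumptions of this example, not the paper's algorithm.

```python
import numpy as np

def perturb_prediction(probs, epsilon=0.1, seed=None):
    """Add a perturbation to a prediction vector while keeping
    (1) the predicted label (argmax) unchanged, so the target model's
        accuracy is unaffected, and
    (2) the output a valid probability distribution.
    Illustrative only: random noise stands in for AEPPT's adversarially
    crafted perturbation."""
    probs = np.asarray(probs, dtype=float)
    rng = np.random.default_rng(seed)
    top = int(np.argmax(probs))

    noise = rng.normal(scale=epsilon, size=probs.shape)
    perturbed = np.clip(probs + noise, 1e-6, None)  # keep entries positive
    perturbed /= perturbed.sum()                    # renormalize to sum to 1

    # If the noise flipped the top-1 label, swap the largest entry back
    # into the original argmax position so the prediction is preserved.
    new_top = int(np.argmax(perturbed))
    if new_top != top:
        perturbed[top], perturbed[new_top] = perturbed[new_top], perturbed[top]
    return perturbed

original = np.array([0.70, 0.20, 0.10])
modified = perturb_prediction(original, seed=0)
assert np.argmax(modified) == np.argmax(original)  # label preserved
assert abs(modified.sum() - 1.0) < 1e-9            # still a distribution
```

Because only the released confidence scores change, such a defense can wrap an already-deployed model's output without retraining it, which is the generality the abstract claims.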


Related Research

09/01/2020  Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries
Machine learning models have been shown to leak information violating th...

07/12/2022  RelaxLoss: Defending Membership Inference Attacks without Losing Utility
As a long-term threat to the privacy of training data, membership infere...

11/30/2020  TransMIA: Membership Inference Attacks Using Transfer Shadow Training
Transfer learning has been widely studied and gained increasing populari...

05/14/2022  Evaluating Membership Inference Through Adversarial Robustness
The usage of deep learning is being escalated in many applications. Due ...

07/21/2020  Membership Inference with Privately Augmented Data Endorses the Benign while Suppresses the Adversary
Membership inference (MI) in machine learning decides whether a given ex...

02/11/2021  Adversarial Poisoning Attacks and Defense for General Multi-Class Models Based On Synthetic Reduced Nearest Neighbors
State-of-the-art machine learning models are vulnerable to data poisonin...

02/11/2022  Privacy-preserving Generative Framework Against Membership Inference Attacks
Artificial intelligence and machine learning have been integrated into a...
