Defending Model Inversion and Membership Inference Attacks via Prediction Purification

05/08/2020
by Ziqi Yang, et al.

Neural networks are susceptible to data inference attacks such as the model inversion attack and the membership inference attack, where the attacker could infer a reconstruction and the membership of a data sample from the confidence scores predicted by the target classifier. In this paper, we propose a unified approach, the purification framework, to defend against data inference attacks. It purifies the confidence score vectors predicted by the target classifier, with the goal of removing redundant information that the attacker could exploit to perform these inferences. Specifically, we design a purifier model that takes a confidence score vector as input and reshapes it to meet the defense goals. It does not require retraining the target classifier. The purifier can be used to mitigate the model inversion attack, the membership inference attack, or both. We evaluate our approach on deep neural networks using benchmark datasets. We show that the purification framework can effectively defend against the model inversion attack and the membership inference attack while introducing negligible utility loss to the target classifier (e.g., less than a 0.3% drop in classification accuracy), and we empirically show that it is possible to defend against data inference attacks with negligible change to the generalization ability of the classification function.
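
The purifier is, in essence, a small model trained on confidence score vectors rather than on raw inputs. Below is a minimal sketch of that idea, assuming an autoencoder-style purifier in PyTorch; the Purifier class name, the layer sizes, and the softmax re-normalization are illustrative assumptions for this sketch, not the paper's exact design.

    import torch
    import torch.nn as nn

    class Purifier(nn.Module):
        """Reshapes a confidence score vector before it is released to the user."""
        def __init__(self, num_classes: int, hidden_dim: int = 32):
            super().__init__()
            # A narrow bottleneck encourages the purifier to keep only the
            # information needed for prediction quality and discard the rest.
            self.net = nn.Sequential(
                nn.Linear(num_classes, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, num_classes),
            )

        def forward(self, scores: torch.Tensor) -> torch.Tensor:
            # Re-normalize so the purified output is still a probability vector.
            return torch.softmax(self.net(scores), dim=-1)

    # Hypothetical usage: the target classifier stays frozen; only its
    # confidence scores pass through the purifier before release.
    # target = load_pretrained_classifier()          # assumed helper
    # purifier = Purifier(num_classes=10)
    # with torch.no_grad():
    #     scores = torch.softmax(target(x), dim=-1)
    # purified = purifier(scores)                    # returned to the querying user

Because the purifier sits between the classifier and the querying user, only the purified vector leaves the system, which is how the framework avoids retraining the target classifier itself.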

Related research

12/01/2022 - Purifier: Defending Data Inference Attacks via Transforming Confidence Scores
Neural networks are susceptible to data inference attacks such as the me...

09/23/2019 - MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples
In a membership inference attack, an attacker aims to infer whether a da...

06/27/2019 - Stolen Memories: Leveraging Model Memorization for Calibrated White-Box Membership Inference
Membership inference (MI) attacks exploit a learned model's lack of gene...

09/22/2022 - Privacy Attacks Against Biometric Models with Fewer Samples: Incorporating the Output of Multiple Models
Authentication systems are vulnerable to model inversion attacks where a...

02/27/2020 - Membership Inference Attacks and Defenses in Supervised Learning via Generalization Gap
This work studies membership inference (MI) attack against classifiers, ...

10/22/2020 - MixCon: Adjusting the Separability of Data Representations for Harder Data Recovery
To address the issue that deep neural networks (DNNs) are vulnerable to ...

12/22/2022 - GAN-based Domain Inference Attack
Model-based attacks can infer training data information from deep neural...
