MIAShield: Defending Membership Inference Attacks via Preemptive Exclusion of Members

03/02/2022
by Ismat Jarin, et al.

In membership inference attacks (MIAs), an adversary observes the predictions of a model to determine whether a sample is part of the model's training data. Existing MIA defenses conceal the presence of a target sample through strong regularization, knowledge distillation, confidence masking, or differential privacy. We propose MIAShield, a new MIA defense based on preemptive exclusion of member samples instead of masking the presence of a member. The key insight behind MIAShield is to weaken the strong membership signal that stems from the presence of a target sample by preemptively excluding that sample at prediction time, without compromising model utility. To that end, we design and evaluate a suite of preemptive exclusion oracles leveraging model confidence, exact or approximate sample signatures, and learning-based exclusion of member data points. To make this practical, MIAShield splits the training data into disjoint subsets and trains a separate model on each subset to build an ensemble. The disjointness of the subsets ensures that a target sample belongs to exactly one subset, which isolates the sample and facilitates the preemptive exclusion goal. We evaluate MIAShield on three benchmark image classification datasets. We show that MIAShield effectively mitigates membership inference (reducing attack success to near random guessing) for a wide range of MIAs, achieves a far better privacy-utility trade-off than state-of-the-art defenses, and remains resilient against an adaptive adversary.
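The core mechanism described above (disjoint training subsets, one model per subset, and an exclusion oracle consulted at prediction time) can be sketched in a few lines. The snippet below is an illustrative sketch, not the paper's implementation: the class and method names are hypothetical, and it uses the simplest of the proposed oracles, an exact sample-signature lookup via hashing, with majority voting over the non-excluded ensemble members.

```python
import hashlib


class MIAShieldEnsemble:
    """Illustrative sketch of a MIAShield-style ensemble (hypothetical API).

    Each model is trained on one of several disjoint subsets of the
    training data. At prediction time, an exact-signature oracle checks
    whether the query was a training sample; if so, the one model that
    saw it is excluded, and the remaining models vote.
    """

    def __init__(self, subsets, train_fn):
        # subsets: list of disjoint lists of (x, y) training samples
        # train_fn: trains and returns a callable model for one subset
        self.models = [train_fn(s) for s in subsets]
        # Map each member sample's signature to the index of the single
        # subset that contains it (disjointness guarantees uniqueness).
        self.signatures = {}
        for i, subset in enumerate(subsets):
            for x, _ in subset:
                self.signatures[self._sig(x)] = i

    @staticmethod
    def _sig(x):
        # Exact sample signature; a real system would hash raw bytes.
        return hashlib.sha256(repr(x).encode()).hexdigest()

    def predict(self, x):
        # Preemptive exclusion: drop the model trained on x, if any.
        excluded = self.signatures.get(self._sig(x))  # None for non-members
        votes = [m(x) for i, m in enumerate(self.models) if i != excluded]
        # Aggregate the remaining models' predictions by majority vote.
        return max(set(votes), key=votes.count)
```

Because the excluded model is the only one whose training set contains the target sample, the returned prediction carries no direct memorization signal for that sample, which is what weakens the attacker's membership inference.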

