Practical Blind Membership Inference Attack via Differential Comparisons

01/05/2021
by Bo Hui, et al.

Membership inference (MI) attacks threaten user privacy by inferring whether given data samples were used to train a target learning model, e.g., a deep neural network. There are two types of MI attacks in the literature, i.e., those with and without shadow models. The success of the former heavily depends on the quality of the shadow model, i.e., the transferability between the shadow and the target; the latter, given only black-box probing access to the target model, cannot make an effective inference on unknown samples, compared with MI attacks using shadow models, due to an insufficient number of qualified samples labeled with ground-truth membership information. In this paper, we propose an MI attack, called BlindMI, which probes the target model and extracts membership semantics via a novel approach called differential comparison. The high-level idea is that BlindMI first generates a non-member dataset by transforming existing samples into new ones, and then differentially moves samples from a target dataset to the generated non-member set in an iterative manner. If the differential move of a sample increases the set distance, BlindMI considers the sample a non-member, and vice versa. We evaluated BlindMI against state-of-the-art MI attack algorithms. Our evaluation shows that BlindMI improves the F1-score by nearly 20% over the state of the art on some datasets, such as Purchase-50 and Birds-200, in the blind setting where the adversary knows neither the target model's architecture nor the target dataset's ground-truth labels. We also show that BlindMI can defeat state-of-the-art defenses.
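The abstract outlines the differential-comparison loop but not its concrete set distance or update rule, so the following is a minimal illustrative sketch: it assumes a Gaussian-kernel MMD (maximum mean discrepancy) over the target model's output probability vectors as the set distance, and it takes the generated non-member outputs as given. All names here (gaussian_kernel, mmd, differential_comparison) are hypothetical, not from the paper's code.

    import numpy as np

    def gaussian_kernel(x, y, sigma=1.0):
        # RBF kernel on two model-output probability vectors (illustrative choice).
        return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

    def mmd(set_a, set_b, sigma=1.0):
        # Empirical (squared) maximum mean discrepancy between two sets of vectors.
        k_aa = np.mean([gaussian_kernel(a, a2, sigma) for a in set_a for a2 in set_a])
        k_bb = np.mean([gaussian_kernel(b, b2, sigma) for b in set_b for b2 in set_b])
        k_ab = np.mean([gaussian_kernel(a, b, sigma) for a in set_a for b in set_b])
        return k_aa + k_bb - 2.0 * k_ab

    def differential_comparison(target_probs, nonmember_probs, sigma=1.0):
        # target_probs: target-model outputs on the dataset under attack.
        # nonmember_probs: outputs on the generated (assumed) non-member set.
        # Returns a boolean array: True = predicted member.
        target = [np.asarray(p, dtype=float) for p in target_probs]
        nonmem = [np.asarray(p, dtype=float) for p in nonmember_probs]
        is_member = np.ones(len(target), dtype=bool)
        while True:
            members = [t for i, t in enumerate(target) if is_member[i]]
            if len(members) <= 1:
                break
            base = mmd(members, nonmem, sigma)
            flips = []
            for i, t in enumerate(target):
                if not is_member[i]:
                    continue
                rest = [u for j, u in enumerate(target) if is_member[j] and j != i]
                # Differential move: shift sample i to the non-member side.
                if mmd(rest, nonmem + [t], sigma) > base:
                    flips.append(i)  # distance grew, so i looks like a non-member
            if not flips:
                break  # converged: no sample changed sides in this pass
            for i in flips:
                is_member[i] = False
            nonmem += [target[i] for i in flips]  # flipped samples join the non-member set
        return is_member

In a real attack the probability vectors would come from black-box queries to the target model, and the quadratic-time MMD would be vectorized or batched; moving flipped samples into the non-member set before the next pass mirrors the iterative differential move the abstract describes, so each pass compares against a sharper non-member estimate.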

Related research

07/12/2023
SoK: Comparing Different Membership Inference Attacks with a Comprehensive Benchmark
Membership inference (MI) attacks threaten user privacy through determin...

03/22/2023
Do Backdoors Assist Membership Inference Attacks?
When an adversary provides poison samples to a machine learning model, p...

03/04/2022
An Efficient Subpopulation-based Membership Inference Attack
Membership inference attacks allow a malicious entity to predict whether...

06/07/2023
Membership inference attack with relative decision boundary distance
Membership inference attack is one of the most popular privacy attacks i...

03/02/2022
MIAShield: Defending Membership Inference Attacks via Preemptive Exclusion of Members
In membership inference attacks (MIAs), an adversary observes the predic...

02/15/2021
Reconstruction-Based Membership Inference Attacks are Easier on Difficult Problems
Membership inference attacks (MIA) try to detect if data samples were us...

06/27/2023
Your Attack Is Too DUMB: Formalizing Attacker Scenarios for Adversarial Transferability
Evasion attacks are a threat to machine learning models, where adversari...
