Do Backdoors Assist Membership Inference Attacks?

03/22/2023
by Yumeki Goto, et al.

When an adversary provides poison samples to a machine learning model, privacy leakage, such as membership inference attacks that infer whether a sample was included in the model's training set, becomes more effective because poisoning pushes the target sample toward being an outlier. However, such attacks can be detected, since the model's inference accuracy deteriorates due to the poison samples. In this paper, we discuss the backdoor-assisted membership inference attack, a novel membership inference attack based on backdoors, which return the adversary's expected output for any sample containing a trigger. Through experiments on an academic benchmark dataset, we obtained three crucial insights. First, we demonstrate that the backdoor-assisted membership inference attack is unsuccessful. Second, when we analyzed loss distributions to understand the reason for this failure, we found that backdoors cannot separate the loss distributions of training and non-training samples; in other words, backdoors do not affect the distribution of clean samples. Third, we show that poison samples and triggered samples activate neurons from different distributions. Specifically, backdoors leave every clean sample an inlier, in contrast to poison samples, which become outliers. As a result, we confirm that backdoors cannot assist membership inference.
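The setting combines two standard ingredients: a loss-based membership inference test and a pixel-pattern backdoor trigger. As a rough illustration only, not the paper's implementation, the sketch below shows both; the `per_sample_loss` helper, the threshold value, and the patch-stamping trigger are assumptions made for the example.

```python
import numpy as np


def loss_threshold_mia(per_sample_loss, model, samples, threshold):
    """Predict membership by thresholding per-sample loss.

    Training samples typically incur lower loss than unseen samples,
    so a loss below `threshold` is taken as evidence of membership.
    `per_sample_loss(model, x, y)` is a hypothetical helper standing
    in for whatever loss computation the target framework provides.
    """
    losses = np.array([per_sample_loss(model, x, y) for x, y in samples])
    return losses < threshold  # True => predicted training member


def stamp_trigger(image, patch, corner=(0, 0)):
    """Stamp a backdoor trigger patch onto a copy of an image.

    `image` and `patch` are H x W (x C) arrays; the patch is written
    into the given corner, mimicking a simple pixel-pattern trigger.
    """
    out = image.copy()
    r, c = corner
    ph, pw = patch.shape[:2]
    out[r:r + ph, c:c + pw] = patch
    return out
```

In the paper's analysis, the attack fails precisely because the backdoor leaves these clean-sample losses, and hence the member/non-member separation, unchanged.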

