Membership Inference via Backdooring

by Hongsheng Hu, et al.

Recently issued data privacy regulations like the GDPR (General Data Protection Regulation) grant individuals the right to be forgotten. In the context of machine learning, this requires a model to forget about a training data sample if requested by the data owner (i.e., machine unlearning). As an essential step prior to machine unlearning, it remains a challenge for a data owner to tell whether or not her data have been used by an unauthorized party to train a machine learning model. Membership inference is a recently emerging technique to identify whether a data sample was used to train a target model, and seems to be a promising solution to this challenge. However, straightforward adoption of existing membership inference approaches fails to address the challenge effectively, because those approaches were originally designed for attacking membership privacy and suffer from several severe limitations, such as low inference accuracy on well-generalized models. In this paper, we propose a novel membership inference approach, inspired by backdoor technology, to address this challenge. Specifically, our approach of Membership Inference via Backdooring (MIB) leverages the key observation that a backdoored model behaves very differently from a clean model when predicting on deliberately marked samples created by a data owner. Appealingly, MIB requires data owners to mark only a small number of samples for membership inference and needs only black-box access to the target model, with theoretical guarantees for inference results. We perform extensive experiments on various datasets and deep neural network architectures, and the results validate the efficacy of our approach, e.g., marking only 0.1% of the training dataset is practically sufficient for effective membership inference.
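To illustrate the core idea described above, the following is a minimal sketch of a black-box membership check based on marked (trigger-carrying) samples. It assumes a hypothetical `model_predict` query function and uses a one-sided binomial test against the chance that a clean model agrees with the owner's chosen target label; the specific marking scheme, test statistic, and thresholds of MIB itself differ from this simplification.

```python
import math

def membership_via_backdoor(model_predict, marked_samples, target_label,
                            num_classes=10, alpha=0.01):
    """Black-box membership check (sketch, not the paper's exact test).

    Intuition: a model trained on the owner's trigger-marked data tends to
    predict `target_label` on marked samples far more often than a clean
    model would by chance (~1/num_classes for a balanced classifier).
    """
    n = len(marked_samples)
    # Count how often the suspect model outputs the owner-chosen label.
    hits = sum(1 for x in marked_samples if model_predict(x) == target_label)
    p0 = 1.0 / num_classes  # assumed chance agreement for a clean model
    # One-sided binomial tail: P[X >= hits] under the clean-model null.
    p_value = sum(math.comb(n, k) * p0**k * (1 - p0)**(n - k)
                  for k in range(hits, n + 1))
    return p_value < alpha, p_value
```

For example, a model that consistently returns the target label on 30 marked samples yields a p-value of (1/num_classes)^30, so the data owner can conclude her data were used with high statistical confidence, whereas a clean model's occasional chance agreement leaves the null hypothesis standing.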




