Can Membership Inferencing be Refuted?

03/07/2023
by Zhifeng Kong, et al.

The membership inference (MI) attack is currently the most popular test for measuring privacy leakage in machine learning models. Given a machine learning model, a data point, and some auxiliary information, the goal of an MI attack is to determine whether the data point was used to train the model. In this work, we study the reliability of membership inference attacks in practice. Specifically, we show that a model owner can plausibly refute the result of a membership inference test on a data point x by constructing a proof of repudiation, i.e., a proof that the model was trained without x. We design efficient algorithms to construct proofs of repudiation for all data points of the training dataset. Our empirical evaluation demonstrates the practical feasibility of our algorithm by constructing proofs of repudiation for popular machine learning models on MNIST and CIFAR-10. Consequently, our results call for a re-evaluation of the implications of membership inference attacks in practice.
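To make the test being refuted concrete, the following is a minimal sketch of one common MI attack variant, a loss-threshold attack: a point is flagged as a training member when the model's loss on it falls below a threshold. The toy model, the threshold value, and the function names here are illustrative assumptions, not the construction used in the paper.

```python
import numpy as np

def cross_entropy(probs, label):
    """Negative log-likelihood of the true label under the model's predicted distribution."""
    return -np.log(probs[label] + 1e-12)

def loss_threshold_mi_attack(predict_proba, x, y, threshold):
    """Flag (x, y) as a training member when the model's loss on it is below
    the threshold: models tend to fit their training points more tightly, so
    members usually incur lower loss than non-members."""
    return cross_entropy(predict_proba(x), y) < threshold

# Toy stand-in for a trained classifier: a fixed predictive distribution over two classes.
toy_model = lambda x: np.array([0.9, 0.1])

print(loss_threshold_mi_attack(toy_model, x=None, y=0, threshold=0.5))  # low loss -> flagged as member
print(loss_threshold_mi_attack(toy_model, x=None, y=1, threshold=0.5))  # high loss -> flagged as non-member
```

A proof of repudiation, as described in the abstract, would let the model owner dispute the attack's "member" verdict by exhibiting evidence that the model could have been trained without the flagged point.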


Related research

- 10/18/2016, Membership Inference Attacks against Machine Learning Models: "We quantitatively investigate how machine learning models leak informati..."
- 11/18/2019, Privacy Leakage Avoidance with Switching Ensembles: "We consider membership inference attacks, one of the main privacy issues..."
- 02/02/2022, Parameters or Privacy: A Provable Tradeoff Between Overparameterization and Membership Inference: "A surprising phenomenon in modern machine learning is the ability of a h..."
- 09/17/2018, Déjà Vu: an empirical evaluation of the memorization properties of ConvNets: "Convolutional neural networks memorize part of their training data, whic..."
- 12/06/2022, On the Discredibility of Membership Inference Attacks: "With the wide-spread application of machine learning models, it has beco..."
- 05/20/2021, Preventing Machine Learning Poisoning Attacks Using Authentication and Provenance: "Recent research has successfully demonstrated new types of data poisonin..."
- 10/31/2019, Reducing audio membership inference attack accuracy to chance: 4 defenses: "It is critical to understand the privacy and robustness vulnerabilities ..."
