Towards the Infeasibility of Membership Inference on Deep Models

05/27/2020
by   Shahbaz Rezaei, et al.

Recent studies propose membership inference (MI) attacks on deep models. Despite the moderate accuracy such attacks report, we show that the way attack accuracy is reported is often misleading, and that a simple blind attack, which is highly unreliable and inefficient in practice, can often achieve similar accuracy. We show that current MI attack models can identify the membership of misclassified samples with, at best, mediocre accuracy, and such samples constitute only a small portion of the training set. We analyze several features that have not previously been explored for membership inference, including distance to the decision boundary and gradient norms, and conclude that deep models' responses are mostly indistinguishable between training and non-training samples. Moreover, contrary to the general intuition that deeper models have the capacity to memorize training samples and are therefore more vulnerable to membership inference, we find no evidence to support it; in some cases, deeper models are even harder to attack. Furthermore, despite the common belief, we show that overfitting does not necessarily lead to a higher degree of membership leakage. We conduct experiments on MNIST, CIFAR-10, CIFAR-100, and ImageNet, using various model architectures, including LeNet, ResNet, DenseNet, InceptionV3, and Xception. Source code: https://github.com/shrezaei/MI-Attack.
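To make two of the abstract's ideas concrete, here is a minimal sketch (not the authors' released code; all function names and the model/data handles are hypothetical) of the blind baseline, which labels every correctly classified sample as a member, and of per-sample input-gradient norms, one of the candidate membership features the paper analyzes:

```python
# Sketch of two concepts from the abstract, assuming a standard
# PyTorch classifier in eval mode. Not the paper's implementation.
import torch
import torch.nn.functional as F

def blind_attack(model, x, y):
    """Predict 'member' iff the model classifies the sample correctly.

    This baseline needs no shadow models or attack training; the paper
    argues that reported MI accuracies are often close to this trivial bound.
    """
    with torch.no_grad():
        preds = model(x).argmax(dim=1)
    return preds == y  # True -> predicted member, False -> non-member

def input_gradient_norms(model, x, y):
    """L2 norm of the loss gradient w.r.t. each input sample.

    One of the features examined in the paper; train and non-train
    samples turn out to be largely indistinguishable under it.
    """
    x = x.clone().requires_grad_(True)
    # Sum reduction keeps per-sample gradients independent of each other.
    loss = F.cross_entropy(model(x), y, reduction="sum")
    grads, = torch.autograd.grad(loss, x)
    return grads.flatten(start_dim=1).norm(dim=1)
```

An MI evaluation would run both functions on held-in and held-out batches and compare the resulting membership predictions or feature distributions; under the paper's thesis, the distributions largely overlap.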


