Membership inference attack with relative decision boundary distance

06/07/2023
by Jiacheng Xu, et al.

Membership inference attack is one of the most popular privacy attacks in machine learning, aiming to predict whether a given sample was contained in the target model's training set. The label-only membership inference attack is a variant that exploits sample robustness, and it has attracted growing attention because it assumes a practical scenario in which the adversary only has access to the predicted labels of the input samples. However, since the decision boundary distance, which measures robustness, is strongly affected by the randomly chosen initial image, the adversary may obtain contradictory results even for the same input sample. In this paper, we propose a new attack method, called the multi-class adaptive membership inference attack, in the label-only setting. The attack traverses the decision boundary distances to all target classes in the early attack iterations, and the subsequent iterations continue with the shortest decision boundary distance to obtain a stable and optimal estimate. Instead of using a single boundary distance, we also employ the relative boundary distance between a sample and its neighboring points as a new membership score to distinguish member samples inside the training set from nonmember samples outside it. Experiments show that previous label-only membership inference attacks using the untargeted HopSkipJump algorithm fail to reach the optimal decision boundary in more than half of the samples, whereas our multi-targeted HopSkipJump algorithm succeeds in almost all samples. In addition, extensive experiments show that our multi-class adaptive MIA outperforms current label-only membership inference attacks on the CIFAR10 and CIFAR100 datasets, especially on the true-positive-rate-at-low-false-positive-rates metric.
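The two ingredients described above, a multi-targeted boundary search that keeps the shortest distance over all classes, and a relative score that subtracts the average distance of nearby points, can be illustrated with a small label-only toy model. This is a minimal sketch under stated assumptions: the function names (`boundary_distance`, `multi_class_distance`, `relative_score`), the toy three-class classifier, and the Gaussian-neighbour scheme are all hypothetical stand-ins, and a simple binary search along a line segment replaces the full HopSkipJump distance estimate used in the paper.

```python
import numpy as np

def boundary_distance(predict, x, x_adv, tol=1e-4):
    # Label-only binary search along the segment from x to x_adv,
    # where x_adv is a point known to carry a different predicted label.
    lo, hi = 0.0, 1.0
    base = predict(x)
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if predict(x + mid * (x_adv - x)) == base:
            lo = mid
        else:
            hi = mid
    return hi * float(np.linalg.norm(x_adv - x))

def multi_class_distance(predict, x, class_examples):
    # Multi-targeted search: probe the boundary toward every other class
    # and keep the shortest distance found.
    base = predict(x)
    return min(boundary_distance(predict, x, ex)
               for cls, ex in class_examples.items() if cls != base)

def relative_score(predict, x, class_examples, n_neighbors=8, sigma=0.05, seed=0):
    # Relative boundary distance: the sample's shortest distance minus the
    # average shortest distance of Gaussian-perturbed neighbours.
    rng = np.random.default_rng(seed)
    d_x = multi_class_distance(predict, x, class_examples)
    d_nb = np.mean([
        multi_class_distance(predict, x + rng.normal(0.0, sigma, x.shape),
                             class_examples)
        for _ in range(n_neighbors)
    ])
    return d_x - d_nb

# Toy three-class "model" on the real line, exposing labels only.
def toy_predict(x):
    v = float(x[0])
    return 0 if v < -0.5 else (1 if v < 0.5 else 2)

examples = {0: np.array([-1.0]), 1: np.array([0.0]), 2: np.array([1.0])}
x = np.array([0.2])  # predicted class 1; nearest boundary sits at 0.5
print(multi_class_distance(toy_predict, x, examples))
```

A higher relative score suggests the sample sits unusually deep inside its decision region compared with its neighbours, which is the behaviour the attack associates with training-set membership.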


