Label-Leaks: Membership Inference Attack with Label

07/30/2020
by Zheng Li, et al.

Machine learning (ML) has made tremendous progress during the past decade, and ML models have been deployed in many real-world applications. However, recent research has shown that ML models are vulnerable to attacks against their underlying training data. One major attack in this field is membership inference, whose goal is to determine whether a data sample is part of the training set of a target machine learning model. So far, most membership inference attacks against ML classifiers leverage the posteriors returned by the target model as their input. However, empirical results show that these attacks can be easily mitigated if the target model returns only the predicted label instead of posteriors. In this paper, we perform a systematic investigation of membership inference when the target model provides only the predicted label. We name our attack the label-only membership inference attack. We focus on two adversarial settings and propose a different attack for each, namely a transfer-based attack and a perturbation-based attack. The transfer-based attack follows the intuition that if a locally established shadow model is similar enough to the target model, the adversary can leverage the shadow model's information to predict a target sample's membership. The perturbation-based attack relies on adversarial perturbation techniques to modify the target sample until it is assigned to a different class, and uses the magnitude of the required perturbation to judge membership; the intuition is that a member sample is harder to perturb to a different class than a non-member sample. Extensive experiments over six different datasets demonstrate that both of our attacks achieve strong performance, further demonstrating the severity of the membership privacy risks of machine learning models.
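To make the transfer-based intuition concrete, here is a minimal Python sketch. It assumes scikit-learn's MLPClassifier as the shadow architecture, labels encoded as 0..K-1, and the shadow model's cross-entropy loss as the membership score; the function names and the threshold calibration are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal sketch of the transfer-based attack (illustrative only).
# Assumptions not taken from the paper: scikit-learn's MLPClassifier
# as the shadow architecture, labels encoded as 0..K-1, and the
# shadow model's cross-entropy loss as the membership score.
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_shadow_model(x_shadow, y_shadow):
    """Train a local shadow model intended to mimic the target model."""
    shadow = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300)
    shadow.fit(x_shadow, y_shadow)
    return shadow

def membership_score(shadow, x, y):
    """Cross-entropy of the shadow posterior at the true label.
    A low loss on the shadow model suggests the sample was also a
    member of the (similar) target model's training set."""
    probs = shadow.predict_proba(x.reshape(1, -1))[0]
    return -np.log(probs[y] + 1e-12)

def transfer_is_member(shadow, x, y, threshold):
    # threshold would be calibrated on shadow members vs. non-members
    return membership_score(shadow, x, y) < threshold
```

The key design point is that the shadow model is queried instead of the target, so the attack needs nothing from the target beyond the data used to train the shadow model.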

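The perturbation-based intuition can be sketched in a label-only fashion as well. In the sketch below, random noise of growing magnitude stands in for a proper decision-based adversarial attack; query_label, the budget parameters, and the threshold are all hypothetical names introduced for illustration.

```python
# Minimal sketch of the perturbation-based attack (illustrative only).
# Random noise of growing magnitude stands in for a decision-based
# adversarial attack; query_label is a hypothetical oracle returning
# only the target model's predicted label.
import numpy as np

def perturbation_magnitude(query_label, x, y, max_eps=2.0, steps=50, trials=20):
    """Smallest noise magnitude that flips the predicted label away
    from y; members are expected to need a larger magnitude."""
    rng = np.random.default_rng(0)
    for eps in np.linspace(0.0, max_eps, steps):
        for _ in range(trials):
            x_adv = x + rng.normal(scale=eps, size=x.shape)
            if query_label(x_adv) != y:  # label-only query
                return eps
    return max_eps  # label never flipped within the budget

def perturbation_is_member(query_label, x, y, threshold):
    # Intuition from the paper: members sit farther from the decision
    # boundary, so a larger perturbation is needed to change their class.
    return perturbation_magnitude(query_label, x, y) > threshold
```

A decision-based attack such as HopSkipJump would estimate the distance to the decision boundary far more precisely while still requiring only predicted labels; the thresholding logic stays the same.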