Label-Only Membership Inference Attacks

Membership inference attacks are among the simplest privacy threats to machine learning models: given a data point and query access to a model, determine whether the point was used to train the model. Existing membership inference attacks exploit a model's abnormally high confidence when queried on its training data. These attacks do not apply if the adversary only has access to the model's predicted labels, without any confidence measure. In this paper, we introduce label-only membership inference attacks. Instead of relying on confidence scores, our attacks evaluate the robustness of a model's predicted labels under perturbations of the input to obtain a fine-grained membership signal. These perturbations include common data augmentations and adversarial examples. We empirically show that our label-only membership inference attacks perform on par with prior attacks that required access to model confidences. We further demonstrate that label-only attacks break multiple defenses against membership inference that (implicitly or explicitly) rely on a phenomenon we call confidence masking. These defenses modify a model's confidence scores to thwart attacks but leave its predicted labels unchanged; our label-only attacks demonstrate that confidence masking is not a viable defense strategy against membership inference. Finally, we investigate worst-case label-only attacks that infer membership for a small number of outlier data points, and show that label-only attacks match confidence-based attacks in this setting as well. We find that training with differential privacy and training with (strong) L2 regularization are the only known defense strategies that successfully prevent all attacks. This remains true even when the differential privacy budget is too high to offer meaningful provable guarantees.
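
To make the perturbation-based membership signal concrete, here is a minimal sketch, not the authors' implementation. The hard-label query interface `predict_label`, the stand-in perturbation `gaussian_perturb`, and the trial count are all hypothetical placeholders; the paper's attacks use data augmentations (e.g., translations and flips) or adversarial-example search rather than plain Gaussian noise.

```python
import numpy as np

def label_only_membership_score(predict_label, x, y_true, perturb,
                                n_trials=25, seed=0):
    """Membership score for (x, y_true) using hard-label queries only.

    predict_label: hypothetical query interface that returns the model's
        predicted class for a single input (no confidence scores).
    perturb: function returning a randomly perturbed copy of x; a data
        augmentation or adversarial perturbation in the actual attacks.
    """
    rng = np.random.default_rng(seed)
    # Count how often the correct label survives the perturbations.
    # Training points tend to be classified more robustly, so a higher
    # fraction of surviving labels suggests membership.
    correct = sum(
        predict_label(perturb(x, rng)) == y_true for _ in range(n_trials)
    )
    return correct / n_trials

def gaussian_perturb(x, rng, sigma=0.05):
    # Toy perturbation for illustration only.
    return x + sigma * rng.standard_normal(x.shape)
```

A decision rule then thresholds this score (predict "member" if it exceeds some value calibrated, for instance, with shadow models trained on similar data). The adversarial-example variant instead estimates the input's distance to the model's decision boundary using a hard-label attack and thresholds that distance.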

Related research

03/13/2022
One Parameter Defense – Defending against Data Inference Attacks via Differential Privacy
Machine learning models are vulnerable to data inference attacks, such a...

05/24/2019
Privacy Risks of Securing Machine Learning Models against Adversarial Examples
The arms race between attacks and defenses for machine learning models h...

12/03/2022
LDL: A Defense for Label-Based Membership Inference Attacks
The data used to train deep neural network (DNN) models in applications ...

02/02/2022
Parameters or Privacy: A Provable Tradeoff Between Overparameterization and Membership Inference
A surprising phenomenon in modern machine learning is the ability of a h...

07/27/2022
Membership Inference Attacks via Adversarial Examples
The rise of machine learning and deep learning led to significant impro...

11/30/2020
TransMIA: Membership Inference Attacks Using Transfer Shadow Training
Transfer learning has been widely studied and gained increasing populari...

07/04/2023
Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction
Machine learning (ML) models are vulnerable to membership inference atta...
