Are Attribute Inference Attacks Just Imputation?

09/02/2022
by Bargav Jayaraman, et al.

Models can expose sensitive information about their training data. In an attribute inference attack, an adversary has partial knowledge of some training records and access to a model trained on those records, and infers the unknown values of a sensitive feature of those records. We study a fine-grained variant of attribute inference that we call sensitive value inference, where the adversary's goal is to identify with high confidence some records from a candidate set for which the unknown attribute has a particular sensitive value. We explicitly compare attribute inference with data imputation that captures the training distribution statistics, under various assumptions about the training data available to the adversary. Our main conclusions are: (1) previous attribute inference methods do not reveal more about the training data from the model than can be inferred by an adversary without access to the trained model, but with the same knowledge of the underlying distribution as needed to train the attribute inference attack; (2) black-box attribute inference attacks rarely learn anything that cannot be learned without the model; but (3) white-box attacks, which we introduce and evaluate in the paper, can reliably identify some records with the sensitive attribute value that would not be predicted without access to the model. Furthermore, we show that proposed defenses such as differentially private training and removing vulnerable records from training do not mitigate this privacy risk. The code for our experiments is available at <https://github.com/bargavj/EvaluatingDPML>.
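
To make the comparison concrete, below is a minimal, self-contained sketch of a data-imputation baseline versus a black-box attribute inference attack of the kind described above. It is not the authors' code: the synthetic data, the scikit-learn models, and the helper function are illustrative assumptions only.

```python
# Illustrative sketch only (not from the EvaluatingDPML repository).
# Assumptions: synthetic tabular data, a binary sensitive attribute,
# and scikit-learn models standing in for the target model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Records: 4 non-sensitive features, 1 binary sensitive attribute,
# and a task label that depends on both.
n = 2000
X_ns = rng.normal(size=(n, 4))                 # non-sensitive features
s = (rng.random(n) < 0.3).astype(int)          # sensitive attribute
y = ((X_ns[:, 0] + 1.5 * s + rng.normal(scale=0.5, size=n)) > 0.5).astype(int)
X = np.column_stack([X_ns, s])

# Target model trained on the full records, including the sensitive column.
target = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Imputation baseline: predict the sensitive value from the other features
# using only knowledge of the data distribution (no access to `target`).
imputer = LogisticRegression(max_iter=1000).fit(X_ns, s)

# Black-box attribute inference: plug in each candidate sensitive value,
# query the target model, and keep the value whose prediction assigns the
# highest confidence to the record's known task label.
def infer_sensitive(x_ns, label):
    scores = [target.predict_proba(np.append(x_ns, v).reshape(1, -1))[0, label]
              for v in (0, 1)]
    return int(np.argmax(scores))

candidates = range(500)  # subset of training records partially known to the adversary
imputed = imputer.predict(X_ns[candidates])
inferred = np.array([infer_sensitive(X_ns[i], y[i]) for i in candidates])

print("imputation accuracy:      ", (imputed == s[candidates]).mean())
print("black-box attack accuracy:", (inferred == s[candidates]).mean())
```

The white-box attacks introduced in the paper additionally exploit the model's internals rather than only its predictions; this sketch covers only the black-box setting and the imputation baseline that the paper compares it against.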

Related research

12/07/2020 - Black-box Model Inversion Attribute Inference Attacks on Classification Models
Increasing use of ML technologies in privacy-sensitive domains such as m...

05/26/2022 - Membership Inference Attack Using Self Influence Functions
Member inference (MI) attacks aim to determine if a specific data sample...

09/20/2023 - Information Leakage from Data Updates in Machine Learning Models
In this paper we consider the setting where machine learning models are ...

06/01/2023 - Does Black-box Attribute Inference Attacks on Graph Neural Networks Constitute Privacy Risk?
Graph neural networks (GNNs) have shown promising results on real-life d...

06/29/2020 - Reducing Risk of Model Inversion Using Privacy-Guided Training
Machine learning models often pose a threat to the privacy of individual...

03/25/2022 - Canary Extraction in Natural Language Understanding Models
Natural Language Understanding (NLU) models can be trained on sensitive ...

10/02/2020 - Quantifying Privacy Leakage in Graph Embedding
Graph embeddings have been proposed to map graph data to low dimensional...
