Are Your Sensitive Attributes Private? Novel Model Inversion Attribute Inference Attacks on Classification Models

01/23/2022
by Shagufta Mehnaz, et al.

The increasing use of machine learning (ML) technologies in privacy-sensitive domains such as medical diagnosis, lifestyle prediction, and business decision-making highlights the need to better understand whether these technologies leak sensitive and proprietary training data. In this paper, we focus on model inversion attacks in which the adversary knows non-sensitive attributes of records in the training data and aims to infer the value of a sensitive attribute unknown to the adversary, using only black-box access to the target classification model. We first devise a novel confidence score-based model inversion attribute inference attack that significantly outperforms the state of the art. We then introduce a label-only model inversion attack that relies solely on the model's predicted labels yet matches our confidence score-based attack in effectiveness. We also extend both attacks to the scenario where some of the other (non-sensitive) attributes of a target record are unknown to the adversary. We evaluate our attacks on two types of machine learning models, decision trees and deep neural networks, trained on three real datasets. Finally, we empirically demonstrate the disparate vulnerability of model inversion attacks, i.e., that specific groups in the training dataset (grouped by gender, race, etc.) can be more vulnerable to these attacks.
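To make the threat model concrete, below is a minimal sketch of a confidence score-based attribute inference attack of the kind the abstract describes: the adversary, who knows a record's non-sensitive attributes and its true label, tries every possible value of the sensitive attribute against the black-box model and keeps the value that yields the highest confidence in the known label. All identifiers here (query_model, toy_model, known_attrs, candidates) are illustrative assumptions, not code from the paper.

```python
# Hypothetical sketch of a confidence score-based model inversion
# attribute inference attack. Assumes black-box access to a target
# classifier that returns a per-label confidence score.

def infer_sensitive_attribute(query_model, known_attrs, true_label, candidates):
    """Plug each candidate sensitive value into the record and return
    the one that makes the model most confident in the known true label."""
    best_value, best_confidence = None, float("-inf")
    for value in candidates:
        record = dict(known_attrs, sensitive=value)  # guess the sensitive value
        scores = query_model(record)                 # single black-box query
        if scores[true_label] > best_confidence:
            best_value, best_confidence = value, scores[true_label]
    return best_value


# Toy usage with a stand-in "model": the attack needs only query access.
if __name__ == "__main__":
    def toy_model(record):
        # Returns a confidence score per label (two labels here).
        p = 0.9 if record["sensitive"] == "yes" else 0.3
        return {0: 1.0 - p, 1: p}

    guess = infer_sensitive_attribute(
        toy_model,
        known_attrs={"age": 45, "zip": "47907"},
        true_label=1,
        candidates=["yes", "no"],
    )
    print(guess)  # -> "yes"
```

A label-only variant in the spirit of the paper's second attack would use only the predicted label rather than the confidence vector, e.g., retaining the candidate values whose prediction reproduces the known label; the paper's actual attack constructions are more involved than this sketch.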

