Reducing Risk of Model Inversion Using Privacy-Guided Training

06/29/2020
by Abigail Goldsteen, et al.

Machine learning models often pose a threat to the privacy of the individuals whose data is in their training set. Several recent attacks, including model inversion and attribute inference attacks, can infer sensitive information from trained models: they reveal the values of certain sensitive features of individuals who participated in training the model. It has also been shown that several factors, including feature influence, contribute to an increased risk of model inversion. We observe that not all features necessarily share the same level of privacy or sensitivity. In many cases, certain features used to train a model are considered especially sensitive and are therefore prime candidates for inversion. We present a solution for countering model inversion attacks in tree-based models by reducing the influence of sensitive features in those models. This avenue has not yet been thoroughly investigated, with only nascent prior attempts to use it as a countermeasure against attribute inference. Our work shows that, in many cases, a model can be trained in different ways, resulting in different influence levels for the various features, without necessarily harming the model's accuracy. We utilize this fact to train models in a manner that reduces their reliance on the most sensitive features while increasing the importance of less sensitive ones. Our evaluation confirms that training models in this manner reduces the risk of inference for those features, as demonstrated through several black-box and white-box attacks.
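To make the threat concrete, here is a minimal sketch of a black-box attribute inference attack of the kind the abstract describes (in the spirit of Fredrikson et al.'s model inversion): the adversary knows a victim's non-sensitive features and label, queries the model with every candidate value of the sensitive feature, and keeps the value that makes the model most confident in the known label. The function name, the `candidates` list, and the scikit-learn-style `predict_proba` interface are illustrative assumptions, not the paper's code.

```python
import numpy as np

def infer_sensitive_feature(model, known_features, true_label,
                            sensitive_idx, candidates):
    """Guess a victim's sensitive feature via black-box queries:
    try each candidate value and return the one for which the model
    is most confident in the victim's known label.
    Assumes true_label indexes the columns of predict_proba output."""
    best_value, best_conf = None, -1.0
    for value in candidates:
        x = np.asarray(known_features, dtype=float).copy()
        x[sensitive_idx] = value  # plug in the guessed value
        conf = model.predict_proba(x.reshape(1, -1))[0][true_label]
        if conf > best_conf:
            best_value, best_conf = value, conf
    return best_value
```

This attack succeeds more often when the sensitive feature strongly influences the model's output, which is exactly the dependence the paper proposes to reduce.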

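The defense side can likewise be sketched. The snippet below is not the paper's privacy-guided training procedure; it only demonstrates, on assumed data and names, one crude way to lower a sensitive feature's influence in a tree ensemble: permuting that column during training so splits on it carry little signal, then checking that its feature importance drops while held-out accuracy survives.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

SENSITIVE_IDX = 0  # pretend column 0 is the sensitive feature

# Baseline: ordinary training.
base = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Influence-reducing variant: shuffle the sensitive column in the training
# set so splits on it carry little signal, pushing trees toward other features.
rng = np.random.default_rng(0)
X_priv = X_tr.copy()
X_priv[:, SENSITIVE_IDX] = rng.permutation(X_priv[:, SENSITIVE_IDX])
priv = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_priv, y_tr)

print("sensitive-feature importance:",
      base.feature_importances_[SENSITIVE_IDX],
      "->", priv.feature_importances_[SENSITIVE_IDX])
print("test accuracy:", base.score(X_te, y_te), "->", priv.score(X_te, y_te))
```

A real implementation would trade influence reduction against accuracy more gradually (e.g., partial noising or constrained split selection) rather than destroying the feature outright, which is the balance the paper's privacy-guided training aims to strike.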
