Reducing Risk of Model Inversion Using Privacy-Guided Training

06/29/2020
by Abigail Goldsteen et al.

Machine learning models often pose a threat to the privacy of individuals whose data is part of the training set. Several recent attacks, including model inversion and attribute inference attacks, are able to infer sensitive information from trained models, revealing the values of certain sensitive features of individuals who participated in training. It has also been shown that several factors contribute to an increased risk of model inversion, including the influence a feature has on the model. We observe that not all features share the same level of privacy or sensitivity: in many cases, certain features used to train a model are considered especially sensitive and are therefore prime candidates for inversion. We present a solution for countering model inversion attacks in tree-based models by reducing the influence of sensitive features in those models. This avenue has not yet been thoroughly investigated, with only very nascent previous attempts at using it as a countermeasure against attribute inference. Our work shows that, in many cases, a model can be trained in different ways, yielding different influence levels for the various features, without necessarily harming its accuracy. We exploit this fact to train models in a manner that reduces their reliance on the most sensitive features while increasing the importance of less sensitive ones. Our evaluation confirms that models trained in this manner are less susceptible to inference of those features, as demonstrated through several black-box and white-box attacks.
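To make the idea concrete, here is a minimal sketch, not the paper's actual algorithm: it trains a tree-based model twice and compares how much importance the model assigns to a designated sensitive feature versus how much accuracy is retained. The synthetic dataset, the choice of feature index 3 as "sensitive", and the use of quantile coarsening as a stand-in for privacy-guided influence reduction are all illustrative assumptions.

```python
# Illustrative sketch only (NOT the paper's privacy-guided training):
# show that a tree ensemble can often be retrained with reduced reliance
# on one sensitive feature without a large accuracy penalty.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10,
                           n_informative=6, random_state=0)
SENSITIVE = 3  # hypothetical index of the sensitive feature

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Baseline: unconstrained training.
base = RandomForestClassifier(n_estimators=200, random_state=0)
base.fit(X_tr, y_tr)

def coarsen(X, idx, bins=2):
    """Quantile-bin one column so splits on it carry less information,
    nudging the trees toward other features (a simplified proxy for
    reducing that feature's influence)."""
    Xc = X.copy()
    edges = np.quantile(X[:, idx], np.linspace(0, 1, bins + 1)[1:-1])
    Xc[:, idx] = np.digitize(X[:, idx], edges)
    return Xc

# "Influence-reduced" variant: same model, coarsened sensitive feature.
guided = RandomForestClassifier(n_estimators=200, random_state=0)
guided.fit(coarsen(X_tr, SENSITIVE), y_tr)

print("baseline: acc=%.3f  sensitive-feature importance=%.3f"
      % (base.score(X_te, y_te), base.feature_importances_[SENSITIVE]))
print("guided:   acc=%.3f  sensitive-feature importance=%.3f"
      % (guided.score(coarsen(X_te, SENSITIVE), y_te),
         guided.feature_importances_[SENSITIVE]))
```

Coarsening is only a crude proxy here; the paper's privacy-guided training shapes feature influence during model construction itself, but the before/after comparison of accuracy and feature importance mirrors the trade-off the abstract describes.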
