Systematic Evaluation of Privacy Risks of Machine Learning Models

03/24/2020
by Liwei Song et al.

Machine learning models are prone to memorizing sensitive data, making them vulnerable to membership inference attacks, in which an adversary aims to guess whether an input sample was used to train the model. In this paper, we show that prior work on membership inference attacks may severely underestimate privacy risks by relying solely on training custom neural network classifiers to perform the attacks and by focusing only on aggregate results over data samples, such as attack accuracy. To overcome these limitations, we first propose to benchmark membership inference privacy risks by improving existing non-neural-network-based inference attacks and by proposing a new inference attack based on a modified prediction entropy. We also propose benchmarks for defense mechanisms that account for adaptive adversaries with knowledge of the defense, as well as for the trade-off between model accuracy and privacy risks. Using our benchmark attacks, we demonstrate that existing defense approaches are not as effective as previously reported. Next, we introduce a new approach for fine-grained privacy analysis by formulating and deriving a new metric called the privacy risk score. The privacy risk score measures an individual sample's likelihood of being a training member, which allows an adversary to identify the samples whose membership can be inferred with high confidence. We experimentally validate the effectiveness of the privacy risk score and show that its distribution across individual samples is heterogeneous. Finally, we perform an in-depth investigation of why certain samples have high privacy risk scores, including correlations with model sensitivity, generalization error, and feature embeddings. Our work emphasizes the importance of a systematic and rigorous evaluation of the privacy risks of machine learning models.
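As a rough illustration of the modified prediction-entropy attack named in the abstract, the sketch below computes a score of the form -(1 - F(x)_y) log F(x)_y - sum over i != y of F(x)_i log(1 - F(x)_i) from a model's prediction vector and thresholds it to guess membership. The helper names and the single scalar threshold are assumptions for illustration; the paper tunes class-dependent thresholds using shadow-model data.

    import numpy as np

    def modified_entropy(probs, label, eps=1e-12):
        # Mentr(F(x), y) = -(1 - F_y) * log(F_y) - sum_{i != y} F_i * log(1 - F_i)
        # Lower values indicate a confident, correct prediction, which
        # correlates with the sample having been seen during training.
        p = np.clip(np.asarray(probs, dtype=float), eps, 1.0 - eps)
        p_y = p[label]
        others = np.delete(p, label)
        return -(1.0 - p_y) * np.log(p_y) - np.sum(others * np.log(1.0 - others))

    def infer_membership(probs, label, threshold):
        # Guess "member" when the modified entropy falls below the threshold.
        # (Hypothetical scalar threshold for simplicity.)
        return modified_entropy(probs, label) <= threshold

    # A confident, correct prediction yields a low score:
    print(modified_entropy([0.05, 0.9, 0.05], label=1))  # small value
    print(modified_entropy([0.6, 0.2, 0.2], label=1))    # larger value

Unlike the standard prediction entropy, this score also uses the true label y, so it stays low only when the model is both confident and correct.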
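The privacy risk score is described in the abstract as a per-sample likelihood of training membership. A minimal sketch of how such a score can be estimated with Bayes' rule is given below, assuming empirical distributions of an attack metric (e.g., the modified entropy above) over known members and non-members of a shadow model; the function name and the histogram-based density estimate are illustrative choices, not the paper's exact derivation.

    import numpy as np

    def privacy_risk_score(value, member_values, nonmember_values,
                           prior_member=0.5, bins=50):
        # Estimate P(member | observed attack metric) via Bayes' rule,
        # using histogram densities of the metric on known members and
        # non-members of a shadow model.
        member_values = np.asarray(member_values, dtype=float)
        nonmember_values = np.asarray(nonmember_values, dtype=float)
        lo = min(member_values.min(), nonmember_values.min())
        hi = max(member_values.max(), nonmember_values.max())
        edges = np.linspace(lo, hi, bins + 1)
        dens_m, _ = np.histogram(member_values, bins=edges, density=True)
        dens_n, _ = np.histogram(nonmember_values, bins=edges, density=True)
        idx = int(np.clip(np.searchsorted(edges, value) - 1, 0, bins - 1))
        num = dens_m[idx] * prior_member
        den = num + dens_n[idx] * (1.0 - prior_member)
        return num / den if den > 0 else prior_member

An adversary could then target only the samples whose score exceeds a chosen confidence level, consistent with the abstract's observation that privacy risk is heterogeneous across individual samples.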


Related research

09/01/2020  Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries
02/07/2022  Membership Inference Attacks and Defenses in Neural Network Pruning
08/17/2022  On the Privacy Effect of Data Enhancement via the Lens of Memorization
12/04/2021  SHAPr: An Efficient and Versatile Membership Privacy Risk Metric for Machine Learning
06/11/2022  NeuGuard: Lightweight Neuron-Guided Defense against Membership Inference Attacks
05/21/2020  Revisiting Membership Inference Under Realistic Assumptions
06/09/2020  On the Effectiveness of Regularization Against Membership Inference Attacks
