Membership Inference Attacks and Defenses in Neural Network Pruning

02/07/2022
by Xiaoyong Yuan, et al.

Neural network pruning has become an essential technique for reducing the computation and memory requirements of deploying deep neural networks on resource-constrained devices. Most existing research focuses on balancing the sparsity and accuracy of a pruned network by strategically removing insignificant parameters and retraining the pruned model. Because retraining reuses the original training samples, it poses serious privacy risks due to increased memorization, a threat that has not yet been investigated. In this paper, we conduct the first analysis of privacy risks in neural network pruning. Specifically, we study the impact of pruning on training data privacy, i.e., membership inference attacks. We first examine how pruning affects prediction divergence, showing that the pruning process disproportionately changes the pruned model's behavior on members versus non-members; this divergence also varies across classes in a fine-grained manner. Motivated by this divergence, we propose a self-attention membership inference attack against pruned neural networks. Extensive experiments rigorously evaluate the privacy impacts of different pruning approaches, sparsity levels, and levels of adversary knowledge. The proposed attack achieves higher attack performance on pruned models than eight existing membership inference attacks. In addition, we propose a new defense mechanism that protects the pruning process by mitigating the prediction divergence with a KL-divergence-based regularizer; experiments demonstrate that it effectively mitigates the privacy risks while maintaining the sparsity and accuracy of the pruned models.
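To make the pruning-and-retraining pipeline that the attack targets concrete, the following is a minimal sketch assuming PyTorch and simple L1 magnitude pruning; the sparsity level, optimizer, and training loop are illustrative assumptions, not the specific pruning approaches evaluated in the paper.

```python
# Minimal sketch of a prune-then-retrain pipeline: remove small-magnitude
# weights, then fine-tune on the *same* training samples. The reuse of
# member data in fine-tuning is what the paper links to increased memorization.
import torch
import torch.nn.utils.prune as prune


def prune_and_finetune(model, train_loader, sparsity=0.7, epochs=5, lr=1e-3):
    # Prune the smallest-magnitude weights in every linear/conv layer.
    for module in model.modules():
        if isinstance(module, (torch.nn.Linear, torch.nn.Conv2d)):
            prune.l1_unstructured(module, name="weight", amount=sparsity)

    # Fine-tune the pruned model on the original training (member) data.
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    criterion = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in train_loader:
            optimizer.zero_grad()
            criterion(model(x), y).backward()
            optimizer.step()
    return model
```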

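The abstract describes the defense only as mitigating prediction divergence via a KL-divergence distance during the pruning process. One plausible way to plug such a regularizer into pruned-model fine-tuning is sketched below; using the unpruned model's predictions as the reference distribution and the weight `alpha` are assumptions for illustration, not the authors' exact formulation.

```python
# Illustrative sketch (not the paper's exact method): fine-tune the pruned
# model with an extra KL-divergence penalty that keeps its predictive
# distribution close to a reference (here, the unpruned model), one way a
# KL-based regularizer can damp member/non-member prediction divergence.
import torch
import torch.nn.functional as F


def defended_finetune_step(pruned_model, original_model, x, y, optimizer, alpha=1.0):
    optimizer.zero_grad()
    logits = pruned_model(x)
    task_loss = F.cross_entropy(logits, y)

    with torch.no_grad():
        ref_probs = F.softmax(original_model(x), dim=1)
    # KL(reference || pruned): penalize the pruned model for drifting away
    # from the reference predictive distribution on training (member) samples.
    kl_loss = F.kl_div(F.log_softmax(logits, dim=1), ref_probs, reduction="batchmean")

    loss = task_loss + alpha * kl_loss  # alpha is a hypothetical trade-off weight
    loss.backward()
    optimizer.step()
    return loss.item()
```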

