Sparsity in neural networks can improve their privacy

04/20/2023
by Antoine Gonon, et al.

This article measures how sparsity can make neural networks more robust to membership inference attacks. The empirical results show that sparsity improves the privacy of the network while preserving comparable performance on the task at hand. This empirical study complements and extends the existing literature.
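The abstract does not spell out the paper's exact pipeline, but its two ingredients are well-known techniques. A minimal sketch, assuming magnitude pruning as the sparsification method and a confidence-threshold membership inference attack (both common choices in this literature, not necessarily the authors' exact setup), might look like:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights.

    Ties at the threshold may prune slightly more than the requested
    fraction; real pruning schedules are usually iterative.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def confidence_attack(confidences, threshold=0.9):
    """Simple membership inference: guess 'training member' whenever the
    model's top softmax confidence on a sample exceeds the threshold.
    Overfit (dense) models tend to be more confident on training data,
    which is what makes this attack work."""
    return confidences > threshold
```

For example, `magnitude_prune(w, 0.5)` on a 2x2 weight matrix zeroes the two smallest-magnitude entries; a sparser network that memorizes less should then yield confidence distributions on which `confidence_attack` separates members from non-members less well.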

research
04/11/2023

Sparsity in neural networks can increase their privacy

This article measures how sparsity can make neural networks more robust ...
research
02/07/2022

Membership Inference Attacks and Defenses in Neural Network Pruning

Neural network pruning has been an essential technique to reduce the com...
research
03/08/2022

Quantifying Privacy Risks of Masked Language Models Using Membership Inference Attacks

The wide adoption and application of Masked language models (MLMs) on se...
research
10/06/2021

On The Vulnerability of Recurrent Neural Networks to Membership Inference Attacks

We study the privacy implications of deploying recurrent neural networks...
research
06/14/2023

A Unified Framework of Graph Information Bottleneck for Robustness and Membership Privacy

Graph Neural Networks (GNNs) have achieved great success in modeling gra...
research
02/24/2020

Group Membership Verification with Privacy: Sparse or Dense?

Group membership verification checks if a biometric trait corresponds to...
research
03/01/2021

Wide Network Learning with Differential Privacy

Despite intense interest and considerable effort, the current generation...
