On The Vulnerability of Recurrent Neural Networks to Membership Inference Attacks

10/06/2021
by Parham Gohari, et al.

We study the privacy implications of deploying recurrent neural networks in machine learning. We consider membership inference attacks (MIAs), in which an attacker aims to infer whether a given data record has been used in the training of a learning agent. Using existing MIAs that target feed-forward neural networks, we empirically demonstrate that the attack accuracy wanes for data records used earlier in the training history. In contrast, recurrent networks are specifically designed to better remember their past experience; hence, they are likely to be more vulnerable to MIAs than their feed-forward counterparts. We develop a pair of MIA layouts for two primary applications of recurrent networks, namely, deep reinforcement learning and sequence-to-sequence tasks. We use the first attack to provide empirical evidence that recurrent networks are indeed more vulnerable to MIAs than feed-forward networks with the same performance level. We use the second attack to showcase the differences between the effects of overtraining recurrent and feed-forward networks on the accuracy of their respective MIAs. Finally, we deploy a differential privacy mechanism to resolve the privacy vulnerability that the MIAs exploit. For both attack layouts, the privacy mechanism degrades the attack accuracy from above 80% to the level of guessing membership uniformly at random, while trading off less than 10% in utility.
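To make the threat model concrete, below is a minimal sketch of a confidence-thresholding MIA. It is illustrative only: the Beta-distributed confidences and the 0.8 threshold are synthetic stand-ins, not the deep-RL and sequence-to-sequence attack layouts developed in the paper. The attack simply labels a record a training-set member when the model is unusually confident on it.

```python
# Minimal sketch of a confidence-thresholding membership inference
# attack, in the spirit of the MIAs the abstract describes. The
# Beta-distributed confidences and the 0.8 threshold are synthetic,
# illustrative assumptions, not the paper's attack layouts.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained model: it tends to be more confident on
# records it has seen (members) than on held-out records.
member_conf = rng.beta(8, 2, size=1000)     # confidences on training records
nonmember_conf = rng.beta(5, 5, size=1000)  # confidences on unseen records

def mia_predict(confidences, threshold=0.8):
    """Label a record a training-set member iff the model's
    confidence on it exceeds the threshold."""
    return confidences >= threshold

# Balanced attack accuracy; 0.5 is guessing membership at random.
tpr = mia_predict(member_conf).mean()        # members flagged as members
tnr = (~mia_predict(nonmember_conf)).mean()  # non-members correctly rejected
print(f"attack accuracy: {(tpr + tnr) / 2:.2f}")
```

Because such attacks exploit the confidence gap between members and non-members, a defense that narrows that gap drives the attack accuracy toward 0.5. The second sketch shows the standard clip-and-noise gradient step behind differentially private training (DP-SGD style); the clip norm and noise multiplier are assumed hyperparameters, not values from the paper.

```python
# Minimal sketch of the clip-and-noise step behind differentially
# private training (DP-SGD style). clip_norm and noise_multiplier
# are assumed hyperparameters, not values from the paper.
import numpy as np

def dp_mean_gradient(per_example_grads, clip_norm=1.0,
                     noise_multiplier=1.1, rng=np.random.default_rng(0)):
    """Clip each per-example gradient to clip_norm, average them,
    then add Gaussian noise scaled to the clipping bound."""
    n = len(per_example_grads)
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    noise = rng.normal(0.0, noise_multiplier * clip_norm / n,
                       size=clipped[0].shape)
    return np.mean(clipped, axis=0) + noise

# Example: noisy average of three toy gradients.
grads = [np.array([0.5, -2.0]), np.array([3.0, 1.0]), np.array([-0.2, 0.4])]
print(dp_mean_gradient(grads))
```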


Related research

11/21/2019 · Effects of Differential Privacy and Data Skewness on Membership Inference Vulnerability
Membership inference attacks seek to infer the membership of individual ...

06/02/2019 · Disparate Vulnerability: on the Unfairness of Privacy Attacks Against Machine Learning
A membership inference attack (MIA) against a machine learning model ena...

05/25/2018 · When Recurrent Models Don't Need To Be Recurrent
We prove stable recurrent neural networks are well approximated by feed-...

04/20/2023 · Sparsity in neural networks can improve their privacy
This article measures how sparsity can make neural networks more robust ...

04/11/2023 · Sparsity in neural networks can increase their privacy
This article measures how sparsity can make neural networks more robust ...

11/14/2018 · Verification of Recurrent Neural Networks Through Rule Extraction
The verification problem for neural networks is verifying whether a neur...
