On The Vulnerability of Recurrent Neural Networks to Membership Inference Attacks

by Parham Gohari, et al.

We study the privacy implications of deploying recurrent neural networks in machine learning. We consider membership inference attacks (MIAs), in which an attacker aims to infer whether a given data record was used in the training of a learning agent. Using existing MIAs that target feed-forward neural networks, we empirically demonstrate that attack accuracy wanes for data records used earlier in the training history. In contrast, recurrent networks are specifically designed to better remember their past experience; hence, they are likely to be more vulnerable to MIAs than their feed-forward counterparts. We develop a pair of MIA layouts for two primary applications of recurrent networks, namely, deep reinforcement learning and sequence-to-sequence tasks. We use the first attack to provide empirical evidence that recurrent networks are indeed more vulnerable to MIAs than feed-forward networks with the same performance level. We use the second attack to showcase the differences between the effects of overtraining recurrent and feed-forward networks on the accuracy of their respective MIAs. Finally, we deploy a differential privacy mechanism to resolve the privacy vulnerability that the MIAs exploit. For both attack layouts, the privacy mechanism degrades the attack accuracy from above 80% to no better than guessing the membership uniformly at random, while trading off less than 10% utility.
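As background for readers unfamiliar with MIAs: a common baseline (not the paper's specific attack layouts, which are built for recurrent architectures) guesses that a record is a training member when the model is unusually confident on it. The sketch below is a minimal, self-contained illustration of that confidence-threshold idea; the function names, threshold value, and synthetic confidence distributions are all assumptions made for illustration.

```python
import numpy as np

def confidence_mia(confidences, threshold=0.9):
    """Guess 'member' for records where the target model's confidence
    meets or exceeds the threshold (a simple baseline MIA)."""
    return confidences >= threshold

def attack_accuracy(confidences, is_member, threshold=0.9):
    """Fraction of records whose membership the attack guesses correctly."""
    guesses = confidence_mia(confidences, threshold)
    return float(np.mean(guesses == is_member))

# Synthetic illustration: models tend to be more confident on records
# they were trained on, which is the signal the attack exploits.
rng = np.random.default_rng(0)
member_conf = rng.uniform(0.85, 1.00, size=500)     # seen during training
nonmember_conf = rng.uniform(0.50, 0.95, size=500)  # never seen
confidences = np.concatenate([member_conf, nonmember_conf])
labels = np.concatenate([np.ones(500, bool), np.zeros(500, bool)])

acc = attack_accuracy(confidences, labels, threshold=0.9)
```

On this synthetic data the attack does clearly better than the 50% accuracy of guessing membership uniformly at random, which is the gap the paper's defense aims to close.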
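The abstract does not specify which differential privacy mechanism is deployed; a widely used choice for neural network training is a DP-SGD-style Gaussian mechanism, in which each per-example gradient is clipped to a bounded L2 norm and Gaussian noise scaled to that bound is added. The sketch below shows that generic recipe only; the parameter names and values are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def gaussian_mechanism(grad, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip a per-example gradient to L2 norm <= clip_norm, then add
    Gaussian noise with std = noise_multiplier * clip_norm (DP-SGD style).
    Bounding each example's influence is what makes the noise calibration
    meaningful for differential privacy."""
    if rng is None:
        rng = np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

# Example: a large gradient is scaled down to the clipping bound
# before noise is added.
g = np.full(4, 10.0)
noisy_g = gaussian_mechanism(g, clip_norm=1.0, noise_multiplier=1.0)
```

The noise multiplier controls the privacy/utility trade-off the abstract refers to: more noise weakens the membership signal an MIA relies on, at some cost in model accuracy.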




