
On The Vulnerability of Recurrent Neural Networks to Membership Inference Attacks

10/06/2021
by Yunhao Yang, et al.

We study the privacy implications of deploying recurrent neural networks in machine learning. We consider membership inference attacks (MIAs), in which an attacker aims to infer whether a given data record has been used in the training of a learning agent. Using existing MIAs that target feed-forward neural networks, we empirically demonstrate that the attack accuracy wanes for data records used earlier in the training history. In contrast, recurrent networks are specifically designed to better remember their past experience; hence, they are likely to be more vulnerable to MIAs than their feed-forward counterparts. We develop a pair of MIA layouts for two primary applications of recurrent networks, namely, deep reinforcement learning and sequence-to-sequence tasks. We use the first attack to provide empirical evidence that recurrent networks are indeed more vulnerable to MIAs than feed-forward networks with the same performance level. We use the second attack to showcase the differences between the effects of overtraining recurrent and feed-forward networks on the accuracy of their respective MIAs. Finally, we deploy a differential privacy mechanism to resolve the privacy vulnerability that the MIAs exploit. For both attack layouts, the privacy mechanism degrades the attack accuracy from above 80% to 50%, i.e., to guessing membership uniformly at random, while trading off less than 10% in utility.
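As a concrete illustration of the threat model, below is a minimal sketch of a confidence-thresholding membership inference attack against a feed-forward classifier. The dataset, model, and threshold rule are illustrative assumptions, not the paper's attack layouts; the underlying intuition is simply that models tend to be more confident on records they were trained on.

    # A minimal sketch of a confidence-thresholding membership inference attack.
    # The dataset, model, and threshold rule are illustrative assumptions,
    # not the attack layouts developed in the paper.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_breast_cancer(return_X_y=True)
    # "Members" are the training records; "non-members" were never seen in training.
    X_mem, X_non, y_mem, y_non = train_test_split(X, y, test_size=0.5, random_state=0)

    target = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    target.fit(X_mem, y_mem)

    def confidence(model, X):
        # Confidence = probability the model assigns to its predicted class.
        return model.predict_proba(X).max(axis=1)

    c_mem, c_non = confidence(target, X_mem), confidence(target, X_non)
    # Attack rule: predict "member" when confidence exceeds a threshold tau.
    tau = np.median(np.concatenate([c_mem, c_non]))
    attack_acc = 0.5 * ((c_mem > tau).mean() + (c_non <= tau).mean())
    print(f"membership inference accuracy: {attack_acc:.2f}")  # 0.50 would be random guessing

A differential privacy defense, such as noisy gradient updates during training, flattens this member/non-member confidence gap, which is how the mechanism in the paper pushes the attack accuracy down toward the 50% random-guessing baseline.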


Related research:

11/21/2019 · Effects of Differential Privacy and Data Skewness on Membership Inference Vulnerability
Membership inference attacks seek to infer the membership of individual ...

06/02/2019 · Disparate Vulnerability: on the Unfairness of Privacy Attacks Against Machine Learning
A membership inference attack (MIA) against a machine learning model ena...

05/25/2018 · When Recurrent Models Don't Need To Be Recurrent
We prove stable recurrent neural networks are well approximated by feed-...

09/17/2022 · Introspective Learning: A Two-Stage Approach for Inference in Neural Networks
In this paper, we advocate for two stages in a neural network's decision...

11/14/2018 · Verification of Recurrent Neural Networks Through Rule Extraction
The verification problem for neural networks is verifying whether a neur...

10/17/2022 · A Novel Membership Inference Attack against Dynamic Neural Networks by Utilizing Policy Networks Information
Unlike traditional static deep neural networks (DNNs), dynamic neural ne...