Canary Extraction in Natural Language Understanding Models

03/25/2022
by   Rahil Parikh, et al.

Natural Language Understanding (NLU) models can be trained on sensitive information such as phone numbers, ZIP codes, etc. Recent literature has focused on Model Inversion Attacks (ModIvA) that can extract training data from model parameters. In this work, we present a version of such an attack by extracting canaries inserted in NLU training data. In the attack, an adversary with open-box access to the model reconstructs the canaries contained in the model's training set. We evaluate our approach by performing text completion on canaries and demonstrate that, using only the prefix (non-sensitive) tokens of a canary, we can generate the full canary. For example, in its best configuration our attack reconstructs a four-digit code in the NLU model's training dataset with probability 0.5. As countermeasures, we identify several defense mechanisms that, when combined, effectively eliminate the risk of ModIvA in our experiments.
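The extraction loop the abstract describes — feed the model the known, non-sensitive prefix of a canary and let it complete the sensitive suffix — can be sketched in a few lines. This is a minimal illustrative toy, not the paper's method: a bigram frequency table stands in for the trained NLU model, and the canary, corpus, and function names are all invented for this example.

```python
# Toy sketch of prefix-based canary extraction (illustrative only).
# A bigram next-token table stands in for the trained NLU model.
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count next-token frequencies per token (stand-in for model training)."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        toks = sentence.split()
        for a, b in zip(toks, toks[1:]):
            counts[a][b] += 1
    return counts

def extract_canary(model, prefix, max_new_tokens=6):
    """Greedily complete the non-sensitive prefix, token by token."""
    toks = prefix.split()
    for _ in range(max_new_tokens):
        successors = model.get(toks[-1])
        if not successors:
            break
        toks.append(successors.most_common(1)[0][0])
    return " ".join(toks)

# Training data with one inserted canary ("my pin is 1 2 3 4").
corpus = [
    "play some music",
    "set an alarm for seven",
    "my pin is 1 2 3 4",  # canary: secret digits after a known prefix
]
model = train_bigram(corpus)
print(extract_canary(model, "my pin is"))  # -> "my pin is 1 2 3 4"
```

Because the canary's suffix appears only after its unique prefix, greedy completion recovers it exactly — the same intuition behind measuring the attack's success probability on inserted canaries.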


