Extracting Training Data from Large Language Models

12/14/2020
by Nicholas Carlini, et al.

It has become common to publish large (billion parameter) language models that have been trained on private datasets. This paper demonstrates that in such settings, an adversary can perform a training data extraction attack to recover individual training examples by querying the language model. We demonstrate our attack on GPT-2, a language model trained on scrapes of the public Internet, and are able to extract hundreds of verbatim text sequences from the model's training data. These extracted examples include (public) personally identifiable information (names, phone numbers, and email addresses), IRC conversations, code, and 128-bit UUIDs. Our attack is possible even though each of the above sequences is included in just one document in the training data. We comprehensively evaluate our extraction attack to understand the factors that contribute to its success. For example, we find that larger models are more vulnerable than smaller models. We conclude by drawing lessons and discussing possible safeguards for training large language models.

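At a high level, the attack described in the abstract amounts to sampling many generations from the model and ranking them by how confidently the model predicts them, since unusually low perplexity is a (noisy) signal of memorized training text. The snippet below is a minimal sketch of that idea, assuming the Hugging Face transformers GPT-2 checkpoint; the model name, sampling parameters, and plain perplexity ranking are illustrative choices, not the authors' full pipeline (which, among other refinements, also compares scores against a second reference model).

```python
# Minimal sketch of a training-data extraction attack: sample candidate
# generations from GPT-2, then rank them by perplexity under the model.
# Illustrative only; not the paper's exact pipeline or thresholds.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sample_candidates(n_samples=100, max_length=64):
    """Draw unconditioned samples from the model as extraction candidates."""
    prompt = tokenizer(tokenizer.bos_token, return_tensors="pt")
    outputs = model.generate(
        prompt.input_ids,
        do_sample=True,
        top_k=40,
        max_length=max_length,
        num_return_sequences=n_samples,
        pad_token_id=tokenizer.eos_token_id,
    )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

def perplexity(text):
    """Perplexity of `text` under the model; lower values suggest memorization."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

# Rank candidates and inspect the most confidently predicted generations,
# which are the most likely to contain memorized training data.
candidates = sample_candidates()
scored = sorted((perplexity(t), t) for t in candidates)
for ppl, text in scored[:10]:
    print(f"{ppl:8.2f}  {text[:80]!r}")
```

In practice, raw perplexity alone produces many false positives (e.g., repeated or highly generic text); the paper's stronger membership signals compare the target model's likelihood against references such as a smaller model or a compression-based estimate, but the sample-then-rank structure above is the core of the attack.
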
Related research

Canary Extraction in Natural Language Understanding Models (03/25/2022)
Natural Language Understanding (NLU) models can be trained on sensitive ...

Targeted Attack on GPT-Neo for the SATML Language Model Data Extraction Challenge (02/13/2023)
Previous work has shown that Large Language Models are susceptible to so...

Memorization Without Overfitting: Analyzing the Training Dynamics of Large Language Models (05/22/2022)
Despite their wide adoption, the underlying training and memorization dy...

Controlling the Extraction of Memorized Data from Large Language Models via Prompt-Tuning (05/19/2023)
Large Language Models (LLMs) are known to memorize significant portions ...

Evolution through Large Models (06/17/2022)
This paper pursues the insight that large language models (LLMs) trained...

Prompts Should not be Seen as Secrets: Systematically Measuring Prompt Extraction Attack Success (07/13/2023)
The generations of large language models are commonly controlled through...

Counterfactual Memorization in Neural Language Models (12/24/2021)
Modern neural language models widely used in tasks across NLP risk memor...
