Deduplicating Training Data Mitigates Privacy Risks in Language Models

02/14/2022
by Nikhil Kandpal, et al.

Past work has shown that large language models are susceptible to privacy attacks, where adversaries generate sequences from a trained model and detect which sequences are memorized from the training set. In this work, we show that the success of these attacks is largely due to duplication in commonly used web-scraped training sets. We first show that the rate at which language models regenerate training sequences is superlinearly related to a sequence's count in the training set. For instance, a sequence that is present 10 times in the training data is on average generated 1000 times more often than a sequence that is present only once. We next show that existing methods for detecting memorized sequences have near-chance accuracy on non-duplicated training sequences. Finally, we find that after applying methods to deduplicate training data, language models are considerably more secure against these types of privacy attacks. Taken together, our results motivate an increased focus on deduplication in privacy-sensitive applications and a reevaluation of the practicality of existing privacy attacks.
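
A minimal sketch of the two ideas in the abstract, under stated assumptions: the single figure given (a sequence duplicated 10 times is regenerated roughly 1000 times more often than a unique one) corresponds to a power-law exponent of about 3, and deduplication is approximated here as exact-match filtering. The exponent, the helper names, and the exact-match approach are illustrative assumptions, not the paper's measured curve or deduplication pipeline.

```python
# Illustrative sketch (not the paper's code): a power-law model of how
# regeneration rate scales with duplication count, plus naive exact-match
# deduplication of a toy corpus.

def relative_regeneration_rate(count: int, exponent: float = 3.0) -> float:
    """Expected regenerations of a training sequence, relative to a sequence
    that appears only once. The exponent of ~3 is an assumption fit to the
    abstract's single example (10x duplication -> ~1000x more regenerations)."""
    return count ** exponent


def dedupe_exact(sequences):
    """Keep one copy of each exact-duplicate sequence, preserving order."""
    return list(dict.fromkeys(sequences))


if __name__ == "__main__":
    for count in (1, 10, 100):
        rate = relative_regeneration_rate(count)
        print(f"count={count:>3}: ~{rate:,.0f}x more regenerations than a unique sequence")

    corpus = ["the quick brown fox", "lorem ipsum", "the quick brown fox"]
    print(dedupe_exact(corpus))  # ['the quick brown fox', 'lorem ipsum']
```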

