Quantifying Memorization Across Neural Language Models

02/15/2022
by Nicholas Carlini, et al.

Large language models (LMs) have been shown to memorize parts of their training data, and when prompted appropriately, they will emit the memorized training data verbatim. This is undesirable because memorization violates privacy (exposing user data), degrades utility (repeated easy-to-memorize text is often low quality), and hurts fairness (some texts are memorized over others). We describe three log-linear relationships that quantify the degree to which LMs emit memorized training data. Memorization significantly grows as we increase (1) the capacity of a model, (2) the number of times an example has been duplicated, and (3) the number of tokens of context used to prompt the model. Surprisingly, we find the situation becomes complicated when generalizing these results across model families. On the whole, we find that memorization in LMs is more prevalent than previously believed and will likely get worse as models continue to scale, at least without active mitigations.
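The abstract's notion of emitting memorized training data verbatim can be made concrete with a simple test: prompt the model with the first k tokens of a training example and check whether greedy decoding reproduces the true continuation exactly. The sketch below illustrates that idea using the Hugging Face transformers library; it is not the authors' released code, and the model name, context length, and continuation length are illustrative placeholders rather than the paper's exact experimental setup.

```python
# Minimal sketch of a verbatim-memorization check: prompt with the first
# `context_len` tokens of a training example and test whether greedy decoding
# reproduces the next `continuation_len` tokens exactly.
# Assumptions: any Hugging Face causal LM; the model name below is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "EleutherAI/gpt-neo-125M"  # placeholder model choice
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def is_memorized(example_text: str, context_len: int = 50, continuation_len: int = 50) -> bool:
    """True if greedy decoding from the first `context_len` tokens of the
    example reproduces its next `continuation_len` tokens verbatim."""
    ids = tokenizer(example_text, return_tensors="pt").input_ids[0]
    if ids.numel() < context_len + continuation_len:
        return False  # example too short for this (context, continuation) split
    prompt = ids[:context_len].unsqueeze(0)
    target = ids[context_len:context_len + continuation_len]
    with torch.no_grad():
        out = model.generate(
            prompt,
            max_new_tokens=continuation_len,
            do_sample=False,  # greedy decoding
            pad_token_id=tokenizer.eos_token_id,
        )
    generated = out[0, context_len:context_len + continuation_len]
    return torch.equal(generated, target)
```

The fraction of training examples for which such a check succeeds, plotted against model size, duplication count, or the context length k, is the kind of quantity behind the log-linear relationships the abstract describes.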

Related research

02/14/2022  Deduplicating Training Data Mitigates Privacy Risks in Language Models
Past work has shown that large language models are susceptible to privac...

05/21/2022  Scaling Laws and Interpretability of Learning from Repeated Data
Recent large language models have been trained on vast datasets, but als...

03/15/2022  Do Language Models Plagiarize?
Past literature has illustrated that language models do not fully unders...

04/22/2023  Transformer-Based LM Surprisal Predicts Human Reading Times Best with About Two Billion Training Tokens
Recent psycholinguistic studies have drawn conflicting conclusions about...

12/24/2021  Counterfactual Memorization in Neural Language Models
Modern neural language models widely used in tasks across NLP risk memor...

09/19/2023  Estimating Contamination via Perplexity: Quantifying Memorisation in Language Model Evaluation
Data contamination in model evaluation is getting increasingly prevalent...

05/02/2023  Mitigating Approximate Memorization in Language Models via Dissimilarity Learned Policy
Large Language models (LLMs) are trained on large amounts of data, which...
