
Defending against Reconstruction Attacks with Rényi Differential Privacy

by Pierre Stock, et al.

Reconstruction attacks allow an adversary to regenerate data samples of the training set using access to only a trained model. It has recently been shown that simple heuristics can reconstruct data samples from language models, making this threat scenario an important aspect of model release. Differential privacy is a known solution to such attacks, but is often used with a relatively large privacy budget (epsilon > 8), which does not translate to meaningful guarantees. In this paper we show that, for the same mechanism, we can derive privacy guarantees for reconstruction attacks that are better than the traditional ones from the literature. In particular, we show that larger privacy budgets do not protect against membership inference, but can still protect against extraction of rare secrets. We show experimentally that our guarantees hold against various language models, including GPT-2 fine-tuned on Wikitext-103.
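For context on how an (epsilon, delta) budget is obtained from a Rényi DP guarantee, the sketch below implements the standard RDP-to-DP conversion for the Gaussian mechanism (Rényi divergence of order alpha equal to alpha / (2 * sigma^2), converted via the usual minimization over orders). This is textbook RDP accounting, not the paper's tighter reconstruction-specific bound; the function names and the grid of orders are illustrative choices.

```python
import math

def gaussian_rdp(alpha: float, sigma: float) -> float:
    """RDP of order alpha for the Gaussian mechanism with noise multiplier sigma
    (sensitivity 1): alpha / (2 * sigma^2)."""
    return alpha / (2.0 * sigma ** 2)

def rdp_to_dp(sigma: float, delta: float) -> float:
    """Convert the Gaussian mechanism's RDP curve to an (epsilon, delta)-DP
    guarantee using the standard bound
        epsilon = min over alpha > 1 of  RDP(alpha) + log(1/delta) / (alpha - 1).
    The grid of orders below is an arbitrary illustrative choice."""
    alphas = [1.0 + k / 10.0 for k in range(1, 1000)]  # orders 1.1 .. 100.9
    return min(gaussian_rdp(a, sigma) + math.log(1.0 / delta) / (a - 1.0)
               for a in alphas)
```

As expected, more noise yields a smaller epsilon: `rdp_to_dp(2.0, 1e-5)` is well below `rdp_to_dp(1.0, 1e-5)`. The abstract's point is that the same mechanism (same sigma) can admit a much stronger guarantee when the threat model is restricted to reconstruction of rare secrets rather than membership inference.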

