Selective Differential Privacy for Language Modeling

08/30/2021
by Weiyan Shi, et al.

With the increasing adoption of language models in applications involving sensitive data, it has become crucial to protect these models from leaking private information. Previous work has attempted to tackle this challenge by training RNN-based language models with differential privacy guarantees. However, applying classical differential privacy to language models leads to poor model performance, as the underlying privacy notion is overly pessimistic and provides undifferentiated protection for all tokens in the data. Given that the private information in natural language is sparse (for example, the bulk of an email might not carry personally identifiable information), we propose a new privacy notion, selective differential privacy, to provide rigorous privacy guarantees on the sensitive portion of the data and thereby improve model utility. To realize this new notion, we develop a corresponding privacy mechanism, Selective-DPSGD, for RNN-based language models. Beyond language modeling, we also apply the method to a more concrete application: dialog systems. Experiments on both language modeling and dialog system building show that the proposed privacy-preserving mechanism achieves better utility than the baselines while remaining safe under various privacy attacks. The data, code, and models are available at https://github.com/wyshi/lm_privacy.
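One natural way to picture such a mechanism is a two-part update: tokens flagged non-sensitive contribute an ordinary gradient, while tokens flagged sensitive contribute through a DP-SGD-style path with per-example clipping and Gaussian noise. Below is a minimal, hypothetical PyTorch sketch of that idea. The function name selective_dpsgd_step, the sensitive_mask input, and the clip_norm and noise_multiplier parameters are all assumed for illustration; this is a sketch of the notion, not the authors' released Selective-DPSGD implementation (see the linked repository for that).

    # Hypothetical sketch of selective DP-SGD (assumed details, not the
    # authors' code): plain gradient on non-sensitive tokens, per-example
    # clipped-and-noised gradients on sensitive tokens.
    import torch
    import torch.nn as nn

    def selective_dpsgd_step(model, inputs, targets, sensitive_mask,
                             lr=0.05, clip_norm=1.0, noise_multiplier=1.0):
        """One update step; sensitive_mask is bool, True = sensitive token."""
        params = [p for p in model.parameters() if p.requires_grad]

        logits = model(inputs)                       # (batch, time, vocab)
        token_loss = nn.CrossEntropyLoss(reduction="none")(
            logits.reshape(-1, logits.size(-1)), targets.reshape(-1)
        ).view(targets.shape)                        # (batch, time)

        # Non-private phase: ordinary averaged gradient over the
        # non-sensitive token positions.
        public_loss = (token_loss * (~sensitive_mask).float()).sum() / inputs.size(0)
        public_grads = torch.autograd.grad(
            public_loss, params, retain_graph=True, allow_unused=True)

        # Private phase: per-example gradients on sensitive tokens, clipped
        # in L2 norm and summed, then perturbed with Gaussian noise, as in
        # DP-SGD (Abadi et al., 2016).
        noisy_sum = [torch.zeros_like(p) for p in params]
        for i in range(inputs.size(0)):
            ex_loss = (token_loss[i] * sensitive_mask[i].float()).sum()
            grads = torch.autograd.grad(
                ex_loss, params, retain_graph=True, allow_unused=True)
            grads = [torch.zeros_like(p) if g is None else g
                     for g, p in zip(grads, params)]
            norm = torch.sqrt(sum(g.pow(2).sum() for g in grads)).item()
            scale = min(1.0, clip_norm / (norm + 1e-12))
            for acc, g in zip(noisy_sum, grads):
                acc.add_(g, alpha=scale)

        with torch.no_grad():
            for p, pub, acc in zip(params, public_grads, noisy_sum):
                noise = torch.randn_like(p) * noise_multiplier * clip_norm
                private = (acc + noise) / inputs.size(0)
                if pub is None:
                    pub = torch.zeros_like(p)
                p -= lr * (pub + private)

    # Toy usage with a made-up tiny model; the mask marks ~10% of token
    # positions as sensitive, mimicking the sparsity the abstract describes.
    model = nn.Sequential(nn.Embedding(100, 16), nn.Linear(16, 100))
    x = torch.randint(0, 100, (4, 12))      # (batch, time) token ids
    y = torch.randint(0, 100, (4, 12))      # next-token targets
    mask = torch.rand(4, 12) < 0.1          # True = sensitive position
    selective_dpsgd_step(model, x, y, mask)

Intuitively, only the noisy phase would need to be tracked by the privacy accountant, which is what lets selective differential privacy spend the privacy budget on the sparse sensitive tokens rather than on every token in the sequence.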


