What Does it Mean for a Language Model to Preserve Privacy?

02/11/2022
by Hannah Brown, et al.

Natural language reflects our private lives and identities, making its privacy concerns as broad as those of real life. Language models lack the ability to understand the context and sensitivity of text, and tend to memorize phrases present in their training sets. An adversary can exploit this tendency to extract training data. Depending on the nature of the content and the context in which this data was collected, this could violate expectations of privacy. Thus there is a growing interest in techniques for training language models that preserve privacy. In this paper, we discuss the mismatch between the narrow assumptions made by popular data protection techniques (data sanitization and differential privacy), and the broadness of natural language and of privacy as a social norm. We argue that existing protection methods cannot guarantee a generic and meaningful notion of privacy for language models. We conclude that language models should be trained on text data which was explicitly produced for public use.
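The abstract names differential privacy (DP) as one of the two protection techniques it examines. DP training of language models is typically implemented with DP-SGD (Abadi et al., 2016): each example's gradient is clipped to a norm bound and Gaussian noise is added before the update. The sketch below illustrates that mechanism on a hypothetical toy next-token model; the model, data, and the clip_norm / noise_multiplier values are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy next-token model: an embedding layer feeding a linear head.
vocab_size, embed_dim = 100, 32
embed = nn.Embedding(vocab_size, embed_dim)
head = nn.Linear(embed_dim, vocab_size)
params = list(embed.parameters()) + list(head.parameters())
optimizer = torch.optim.SGD(params, lr=0.1)

# Illustrative privacy hyperparameters (not calibrated to any epsilon).
clip_norm = 1.0          # per-example gradient L2 bound C
noise_multiplier = 1.1   # sigma; noise std is sigma * C

def loss_fn(x, y):
    return F.cross_entropy(head(embed(x)), y)

def dp_sgd_step(xs, ys):
    """One DP-SGD step: clip each example's gradient to C, sum the
    clipped gradients, add Gaussian noise, average, then update."""
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(xs, ys):
        optimizer.zero_grad()
        loss_fn(x.unsqueeze(0), y.unsqueeze(0)).backward()
        # Scale this example's gradient so its total L2 norm is <= C.
        norm = torch.sqrt(sum(p.grad.norm() ** 2 for p in params))
        scale = (clip_norm / (norm + 1e-6)).clamp(max=1.0)
        for s, p in zip(summed, params):
            s += p.grad * scale
    for s, p in zip(summed, params):
        noise = torch.normal(0.0, noise_multiplier * clip_norm, size=s.shape)
        p.grad = (s + noise) / len(xs)
    optimizer.step()

# Stand-in "training text": (current token, next token) id pairs.
xs = torch.randint(0, vocab_size, (16,))
ys = torch.randint(0, vocab_size, (16,))
dp_sgd_step(xs, ys)
```

Note that the guarantee this buys is per training example. The paper's argument is precisely that such a record rarely aligns with the unit of privacy in natural language, where one person's information can be spread across many examples contributed by many speakers.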


