Recovering from Privacy-Preserving Masking with Large Language Models

09/12/2023
by   Arpita Vats, et al.

Model adaptation is crucial for handling the discrepancy between proxy training data and the actual user data received. To perform adaptation effectively, users' textual data is typically stored on servers or their local devices, where downstream natural language processing (NLP) models can be trained directly on such in-domain data. However, this can raise privacy and security concerns due to the added risk of exposing user information to adversaries. Replacing identifying information in textual data with a generic marker has recently been explored. In this work, we leverage large language models (LLMs) to suggest substitutes for masked tokens and evaluate their effectiveness on downstream language modeling tasks. Specifically, we propose multiple pre-trained and fine-tuned LLM-based approaches and conduct empirical studies on various datasets to compare these methods. Experimental results show that models trained on the obfuscated corpora achieve performance comparable to models trained on the original data without privacy-preserving token masking.
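The masking step described above can be sketched as follows. This is a minimal, hypothetical illustration: identifying spans are found with simple regexes (for emails and phone numbers) and replaced with a generic marker; a real pipeline would typically use an NER model to detect identifying information, and a downstream LLM would then suggest substitutes for each marker.

```python
import re

# Generic marker used in place of identifying information (an assumption;
# the actual marker token depends on the downstream model's vocabulary).
MASK = "[MASK]"

# Simplified stand-ins for PII detection; production systems would
# typically use a trained NER model instead of regexes.
PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),      # email addresses
    re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),  # US-style phone numbers
]

def mask_identifiers(text: str, marker: str = MASK) -> str:
    """Replace identifying spans with a generic marker."""
    for pattern in PATTERNS:
        text = pattern.sub(marker, text)
    return text

print(mask_identifiers("Contact jane.doe@example.com or 555-123-4567."))
# An LLM could then propose plausible substitutes for each marker so that
# language models can be trained on the obfuscated corpus.
```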


Related research:

11/05/2022  Privacy-Preserving Models for Legal Natural Language Processing
Pre-training large transformer models with in-domain data improves domai...

05/26/2022  Differentially Private Decoding in Large Language Models
Recent large-scale natural language processing (NLP) systems use a pre-t...

09/03/2020  Attention Flows: Analyzing and Comparing Attention Mechanisms in Language Models
Advances in language modeling have led to the development of deep attent...

02/14/2022  Threats to Pre-trained Language Models: Survey and Taxonomy
Pre-trained language models (PTLMs) have achieved great success and rema...

10/06/2022  Q-LSTM Language Model – Decentralized Quantum Multilingual Pre-Trained Language Model for Privacy Protection
Large-scale language models are trained on a massive amount of natural l...

06/01/2022  THE-X: Privacy-Preserving Transformer Inference with Homomorphic Encryption
As more and more pre-trained language models adopt on-cloud deployment, ...

10/13/2022  Mitigating Unintended Memorization in Language Models via Alternating Teaching
Recent research has shown that language models have a tendency to memori...
