Privacy Analysis in Language Models via Training Data Leakage Report
Recent advances in neural network based language models have led to successful deployments of such models, improving user experience in various applications. It has been demonstrated that the strong performance of language models can come with the ability to memorize rare training samples, which poses serious privacy threats when model training is conducted on confidential user content. This necessitates privacy monitoring techniques to minimize the chance of privacy breaches for models deployed in practice. In this work, we introduce a methodology for identifying the user content in the training data that could be leaked under a strong and realistic threat model. We propose two metrics to quantify user-level data leakage by measuring a model's ability to produce unique sentence fragments present in the training data. Our metrics further enable comparing different models trained on the same data in terms of privacy. We demonstrate our approach through extensive numerical studies on real-world datasets such as email and forum conversations. We further illustrate how the proposed metrics can be utilized to investigate the efficacy of mitigations like differentially private training or API hardening.
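The abstract does not spell out the two metrics, but the core idea, prompting the model with a prefix of a training sentence and checking whether it reproduces the unique remainder, can be sketched as follows. This is a minimal illustration under assumed definitions: the fixed prefix/suffix split, the verbatim-match criterion, the per-user aggregation, and the names `user_level_leakage` and `generate_fn` are placeholders for illustration, not the paper's exact formulation.

```python
from collections import defaultdict
from typing import Callable, Dict, List, Tuple


def user_level_leakage(
    samples: List[Tuple[str, str]],          # (user_id, training sentence)
    generate_fn: Callable[[str, int], str],  # (prompt, max_new_words) -> greedy completion
    prefix_words: int = 8,
    min_suffix_words: int = 4,
) -> Dict[str, float]:
    """Hypothetical leakage score: for each user, the fraction of their
    training sentences whose suffix the model reproduces verbatim when
    prompted with the sentence prefix. A sketch, not the paper's metric."""
    hits: Dict[str, int] = defaultdict(int)
    totals: Dict[str, int] = defaultdict(int)

    for user_id, sentence in samples:
        words = sentence.split()
        if len(words) < prefix_words + min_suffix_words:
            continue  # too short for a meaningful prefix/suffix split
        prefix = " ".join(words[:prefix_words])
        suffix = " ".join(words[prefix_words:])

        completion = generate_fn(prefix, len(words) - prefix_words)
        totals[user_id] += 1
        if suffix in completion:  # verbatim reproduction counts as leakage
            hits[user_id] += 1

    return {user: hits[user] / totals[user] for user in totals}
```

Under this reading, comparing two models trained on the same data amounts to comparing their per-user scores, which is also how the abstract suggests the metrics can be used to judge mitigations such as differentially private training or API hardening.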