Privacy Analysis in Language Models via Training Data Leakage Report

01/14/2021
by Huseyin A. Inan, et al.

Recent advances in neural-network-based language models have led to successful deployments of such models, improving user experience in various applications. It has been demonstrated that the strong performance of language models can come with the ability to memorize rare training samples, which poses serious privacy threats when models are trained on confidential user content. This necessitates privacy monitoring techniques to minimize the chance of privacy breaches for models deployed in practice. In this work, we introduce a methodology for identifying the user content in the training data that could be leaked under a strong and realistic threat model. We propose two metrics to quantify user-level data leakage by measuring a model's ability to produce unique sentence fragments from the training data. Our metrics further enable comparing different models trained on the same data in terms of privacy. We demonstrate our approach through extensive numerical studies on real-world datasets such as email and forum conversations. We further illustrate how the proposed metrics can be used to investigate the efficacy of mitigations such as differentially private training or API hardening.
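
The abstract describes metrics that quantify user-level leakage by checking whether a model can reproduce sentence fragments that are unique to a single user's training data. Below is a minimal sketch of one way such a check could be computed; the fragment length, prefix/suffix split, greedy decoding, and GPT-2 stand-in model are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch of a per-user leakage rate: the fraction of a user's *unique*
# training fragments whose suffix the model completes verbatim when
# prompted with the fragment's prefix. All parameters are illustrative.
from collections import Counter
from transformers import AutoModelForCausalLM, AutoTokenizer


def unique_fragments(user_texts, all_texts, n=8):
    """Word n-grams that occur in this user's data and nowhere else."""
    def ngrams(text):
        toks = text.split()
        return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}
    counts = Counter(g for t in all_texts for g in ngrams(t))
    return {g for t in user_texts for g in ngrams(t) if counts[g] == 1}


def leakage_rate(model, tokenizer, fragments, prefix_words=4):
    """Fraction of fragments the model regenerates from their prefix."""
    leaked = 0
    for frag in fragments:
        words = frag.split()
        prefix = " ".join(words[:prefix_words])
        suffix = " ".join(words[prefix_words:])
        ids = tokenizer(prefix, return_tensors="pt").input_ids
        out = model.generate(ids, max_new_tokens=2 * len(words), do_sample=False)
        completion = tokenizer.decode(out[0][ids.shape[1]:], skip_special_tokens=True)
        leaked += suffix in completion
    return leaked / max(len(fragments), 1)


if __name__ == "__main__":
    # Hypothetical toy data; a real study would use the trained model and
    # its own (e.g. email or forum) training corpus.
    tok = AutoTokenizer.from_pretrained("gpt2")
    lm = AutoModelForCausalLM.from_pretrained("gpt2")
    users = {
        "alice": ["my account recovery phrase is violet umbrella kitten river stone mountain"],
        "bob": ["reminder that the quarterly review meeting moves to thursday afternoon"],
    }
    all_texts = [t for ts in users.values() for t in ts]
    for user, texts in users.items():
        frags = unique_fragments(texts, all_texts)
        print(user, leakage_rate(lm, tok, frags))
```

A higher rate for a user indicates that more of that user's unique content is recoverable from the model, and the same score computed for two models trained on the same data gives a rough privacy comparison between them.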

