KART: Privacy Leakage Framework of Language Models Pre-trained with Clinical Records

12/31/2020
by Yuta Nakamura, et al.

Nowadays, mainstream natural language processing (NLP) is empowered by pre-trained language models. In the biomedical domain, only models pre-trained on anonymized data have been published. This policy is acceptable, but it raises two questions: Can the privacy policy of a language model differ from that of its training data? What happens if a private language model is accidentally made public? We empirically evaluated the privacy risk of language models using several BERT models pre-trained on the MIMIC-III corpus with different levels of data anonymity and corpus sizes. We simulated model inversion attacks to obtain the clinical information of target individuals whose full names are already known to the attacker. The BERT models appeared to be low-risk because the Top-100 accuracy of each attack was far below what would be expected by chance. Moreover, most privacy leakage situations share several primary factors; we therefore formalized various privacy leakage scenarios under a novel, universal framework named the Knowledge, Anonymization, Resource, and Target (KART) framework. The KART framework helps parameterize complex privacy leakage scenarios and simplifies their comprehensive evaluation. Since the concept of the KART framework is domain-agnostic, it can contribute to establishing privacy guidelines for language models beyond the biomedical domain.
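The abstract describes two technical ideas: cloze-style model inversion probes against a pre-trained BERT scored by Top-100 accuracy, and the parameterization of a leakage scenario along the four KART axes. The sketch below illustrates both under stated assumptions rather than reproducing the authors' exact setup: it uses the Hugging Face transformers fill-mask pipeline with a generic bert-base-uncased checkpoint standing in for a MIMIC-pretrained model, and the probe template, patient name, and candidate condition are purely illustrative.

```python
# Minimal sketch of a fill-mask "model inversion" probe with Top-k scoring,
# plus a KART-style scenario descriptor.
# Assumptions (not from the paper): the Hugging Face `transformers` library,
# a generic `bert-base-uncased` checkpoint standing in for a MIMIC-pretrained
# BERT, and an illustrative probe template, name, and condition.
from dataclasses import dataclass
from transformers import pipeline


@dataclass(frozen=True)
class KARTScenario:
    """One privacy leakage scenario parameterized along the four KART axes."""
    knowledge: str       # what the attacker already knows, e.g. the full name
    anonymization: str   # how the pre-training data was anonymized
    resource: str        # what the attacker can access, e.g. model weights
    target: str          # what the attacker tries to infer, e.g. a diagnosis


def top_k_hit(fill_mask, patient_name: str, true_condition: str, k: int = 100) -> bool:
    """Return True if the true condition appears among the model's top-k
    predictions for a name-conditioned cloze probe."""
    prompt = f"{patient_name} was diagnosed with {fill_mask.tokenizer.mask_token}."
    predictions = fill_mask(prompt, top_k=k)
    predicted_tokens = {p["token_str"].strip().lower() for p in predictions}
    return true_condition.lower() in predicted_tokens


if __name__ == "__main__":
    scenario = KARTScenario(
        knowledge="target's full name",
        anonymization="none (raw clinical records)",
        resource="published model weights",
        target="clinical diagnosis",
    )
    fill_mask = pipeline("fill-mask", model="bert-base-uncased")
    # Hypothetical target; a real evaluation would iterate over many patients
    # and report the fraction of hits as Top-k accuracy.
    hit = top_k_hit(fill_mask, "John Smith", "pneumonia", k=100)
    print(scenario, "Top-100 hit:", hit)
```

Treating each scenario as an explicit (Knowledge, Anonymization, Resource, Target) tuple is what lets attacks with different assumptions be compared under the same evaluation loop.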
