
KART: Privacy Leakage Framework of Language Models Pre-trained with Clinical Records

12/31/2020
by Yuta Nakamura, et al.

Mainstream natural language processing (NLP) is now powered by pre-trained language models. In the biomedical domain, only models pre-trained on anonymized data have been published. This policy is reasonable, but it raises two questions: Can the privacy policy of a language model differ from that of its training data? What happens if a private language model is accidentally made public? We empirically evaluated the privacy risk of language models using several BERT models pre-trained on the MIMIC-III corpus with different levels of data anonymity and different corpus sizes. We simulated model inversion attacks that attempt to recover the clinical information of target individuals whose full names are already known to the attacker. The BERT models appeared to be low-risk, as the Top-100 accuracy of each attack was far below what would be expected by chance. Moreover, most privacy leakage situations share several common primary factors, so we formalized various privacy leakage scenarios under a novel, universal framework named Knowledge, Anonymization, Resource, and Target (KART). The KART framework parameterizes complex privacy leakage scenarios and simplifies their comprehensive evaluation. Since its concept is domain-agnostic, it can contribute to establishing privacy guidelines for language models beyond the biomedical domain.
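To make the attack setup concrete, below is a minimal sketch of a model-inversion-style probe against a masked language model, using the Hugging Face transformers fill-mask pipeline. The checkpoint, prompt template, target name, and ground-truth condition are illustrative assumptions for demonstration, not the paper's exact attack procedure.

# A minimal sketch of a model-inversion-style probe against a masked
# language model; the checkpoint, prompt template, target name, and
# ground-truth condition are illustrative assumptions, not the paper's
# exact attack procedure.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The attacker knows only the target's full name and asks the model to
# fill in a clinical fact about that person.
name = "John Smith"  # hypothetical target
prompt = f"{name} is diagnosed with {fill_mask.tokenizer.mask_token}."
guesses = fill_mask(prompt, top_k=100)  # 100 highest-probability tokens

# Top-100 accuracy: the attack counts as a hit if the true (single-token)
# condition appears among the top 100 completions.
true_condition = "pneumonia"  # hypothetical ground truth
hit = any(g["token_str"].strip() == true_condition for g in guesses)
print(f"Top-100 hit for {name}: {hit}")

In the paper's setting, a probe along these lines would be run against BERT models pre-trained on MIMIC-III at different anonymization levels and corpus sizes; a Top-100 accuracy near or below chance suggests that little memorized patient information is recoverable this way.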

Related research

Membership Inference Attack Susceptibility of Clinical Language Models (04/16/2021)
Deep Neural Network (DNN) models have been shown to have high empirical ...

You Don't Know My Favorite Color: Preventing Dialogue Representations from Revealing Speakers' Private Personas (04/26/2022)
Social chatbots, also known as chit-chat chatbots, evolve rapidly with l...

Are Large Pre-Trained Language Models Leaking Your Personal Information? (05/25/2022)
Large Pre-Trained Language Models (PLMs) have facilitated and dominated ...

Sensitive Data Detection and Classification in Spanish Clinical Text: Experiments with BERT (03/06/2020)
Massive digital data processing provides a wide range of opportunities a...

When Language Model Meets Private Library (10/31/2022)
With the rapid development of pre-training techniques, a number of langu...

Mining Logical Event Schemas From Pre-Trained Language Models (04/12/2022)
We present NESL (the Neuro-Episodic Schema Learner), an event schema lea...

Targeted Honeyword Generation with Language Models (08/15/2022)
Honeywords are fictitious passwords inserted into databases in order to ...