Membership Inference Attack Susceptibility of Clinical Language Models

04/16/2021
by Abhyuday Jagannatha, et al.

Deep Neural Network (DNN) models have been shown to exhibit high empirical privacy leakage. Clinical language models (CLMs) trained on clinical data have been used to improve performance in biomedical natural language processing tasks. In this work, we investigate the risks of training-data leakage through white-box or black-box access to CLMs. We design and employ membership inference attacks to estimate the empirical privacy leakage for model architectures like BERT and GPT2. We show that membership inference attacks on CLMs lead to non-trivial privacy leakage of up to 7%. Our results show that smaller models have lower empirical privacy leakage than larger ones, and that masked LMs have lower leakage than auto-regressive LMs. We further show that differentially private CLMs can have improved model utility on the clinical domain while ensuring low empirical privacy leakage. Lastly, we also study the effects of group-level membership inference and disease rarity on CLM privacy leakage.
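Membership inference attacks of this kind typically score a candidate record under the model and compare the score against a threshold calibrated on known members and non-members. Below is a minimal, hypothetical sketch of a black-box, loss-threshold attack against an auto-regressive LM; it is not the paper's actual attack, and the model name, threshold value, and helper names are illustrative assumptions.

```python
# Hypothetical sketch: black-box, loss-threshold membership inference
# against an auto-regressive LM. Not the paper's implementation; the
# model name, threshold, and example text are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # stand-in for a clinical LM (assumption)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

@torch.no_grad()
def sample_loss(text: str) -> float:
    """Per-token cross-entropy of `text` under the model (black-box score)."""
    enc = tokenizer(text, return_tensors="pt")
    out = model(**enc, labels=enc["input_ids"])
    return out.loss.item()

def is_member(text: str, threshold: float) -> bool:
    """Predict 'training member' when the loss falls below a threshold
    calibrated on known member/non-member samples."""
    return sample_loss(text) < threshold

if __name__ == "__main__":
    # Illustrative candidate record and threshold; a real attack would
    # calibrate the threshold on held-out member/non-member scores.
    candidate = "Patient presents with chest pain and shortness of breath."
    print(sample_loss(candidate), is_member(candidate, threshold=3.5))
```

Empirical leakage can then be reported as the attack's advantage over random guessing on a balanced member/non-member evaluation set, e.g., accuracy above the 50% guess probability or true-positive rate minus false-positive rate.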


