On the privacy-utility trade-off in differentially private hierarchical text classification

03/04/2021
by   Dominik Wunderlich, et al.

Hierarchical models for text classification can leak sensitive or confidential training data to adversaries because they memorize parts of that data. Using differential privacy during model training can mitigate leakage attacks against trained models by perturbing the training optimizer. However, many model architectures are available for hierarchical text classification, and it is unclear whether some architectures yield a better trade-off between remaining model accuracy and model leakage under differentially private training perturbation than others. We use a white-box membership inference attack to assess the information leakage of three widely used neural network architectures for hierarchical text classification under differential privacy. We show that relatively weak differential privacy guarantees already suffice to completely mitigate the membership inference attack, resulting in only a moderate decrease in utility. More specifically, for large datasets with long texts we observed transformer-based models to achieve an overall favorable privacy-utility trade-off, while for smaller datasets with shorter texts CNNs are preferable.
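The "training optimizer perturbation" mentioned above is commonly realized as DP-SGD: each example's gradient is clipped to bound its influence, and Gaussian noise calibrated to the clipping norm is added before averaging. A minimal NumPy sketch of one such step follows; the function name and parameters are illustrative, not the paper's implementation.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One differentially private gradient step (DP-SGD style):
    clip each per-example gradient to an L2 norm bound, sum the
    clipped gradients, add Gaussian noise scaled to the bound,
    and return the noisy average."""
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    # Noise std is proportional to the per-example sensitivity (clip_norm);
    # the noise_multiplier controls the strength of the privacy guarantee.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

# Toy per-example gradients for a two-parameter model.
grads = [np.array([3.0, 4.0]), np.array([0.1, -0.2])]
update = dp_sgd_step(grads)
```

A larger `noise_multiplier` (or smaller `clip_norm`) strengthens the privacy guarantee at the cost of noisier updates, which is the privacy-utility trade-off the paper measures.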


