On the privacy-utility trade-off in differentially private hierarchical text classification

by Dominik Wunderlich, et al.

Hierarchical models for text classification can leak information about sensitive or confidential training data to adversaries due to training data memorization. Applying differential privacy during model training can mitigate leakage attacks against trained models by perturbing the training optimizer. However, for hierarchical text classification a multiplicity of model architectures is available, and it is unclear whether some architectures yield a better trade-off between remaining model accuracy and model leakage under differentially private training perturbation than others. We use a white-box membership inference attack to assess the information leakage of three widely used neural network architectures for hierarchical text classification under differential privacy. We show that relatively weak differential privacy guarantees already suffice to completely mitigate the membership inference attack, while incurring only a moderate decrease in utility. More specifically, for large datasets with long texts we find that transformer-based models achieve an overall favorable privacy-utility trade-off, whereas for smaller datasets with shorter texts CNNs are preferable.
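The "training optimizer perturbation" the abstract refers to is commonly realized as DP-SGD-style training: per-example gradients are clipped to a fixed norm, averaged, and Gaussian noise calibrated to that clipping norm is added before the update. The sketch below is a minimal, self-contained illustration of one such step using NumPy; the function name, parameter names, and default values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.1, rng=None):
    """One differentially private SGD step (illustrative sketch):
    clip each per-example gradient to at most `clip_norm` in L2 norm,
    average the clipped gradients, add Gaussian noise with standard
    deviation noise_multiplier * clip_norm / batch_size, and apply
    the resulting update to `params`."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    n = len(clipped)
    avg = sum(clipped) / n
    # Gaussian noise calibrated to the clipping norm (the privacy knob
    # is noise_multiplier: larger values give stronger guarantees).
    noise = rng.normal(0.0, noise_multiplier * clip_norm / n,
                       size=np.shape(params))
    return params - lr * (avg + noise)
```

With `noise_multiplier=0` the step reduces to plain SGD on clipped gradients, which makes the clipping behavior easy to verify in isolation.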




