On the privacy-utility trade-off in differentially private hierarchical text classification

03/04/2021 · by Dominik Wunderlich et al.

Hierarchical models for text classification can leak sensitive or confidential training data to adversaries because of training data memorization. Using differential privacy during model training can mitigate leakage attacks against trained models by perturbing the training optimizer. However, many model architectures are available for hierarchical text classification, and it is unclear whether some of them yield a better trade-off between remaining model accuracy and model leakage under differentially private training perturbation than others. We use a white-box membership inference attack to assess the information leakage of three widely used neural network architectures for hierarchical text classification under differential privacy. We show that relatively weak differential privacy guarantees already suffice to completely mitigate the membership inference attack, resulting in only a moderate decrease in utility. More specifically, for large datasets with long texts we observe that transformer-based models achieve an overall favorable privacy-utility trade-off, while for smaller datasets with shorter texts CNNs are preferable.
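
The abstract does not spell out how the optimizer is perturbed. As a rough illustration (not the authors' exact setup), the sketch below shows the core of DP-SGD in PyTorch, an assumed framework here: each example's gradient is clipped to a fixed L2 norm, and calibrated Gaussian noise is added before the update. The values of clip_norm, noise_multiplier, and lr are illustrative placeholders; libraries such as Opacus or TensorFlow Privacy provide vetted implementations.

```python
# Minimal DP-SGD sketch (PyTorch assumed; hyperparameters illustrative).
import torch

def dp_sgd_step(model, loss_fn, batch_x, batch_y,
                lr=0.1, clip_norm=1.0, noise_multiplier=1.1):
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(batch_x, batch_y):  # per-example microbatches
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        # Clip each example's full gradient to L2 norm <= clip_norm.
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = min(1.0, clip_norm / (float(total_norm) + 1e-6))
        for s, g in zip(summed, grads):
            s.add_(g, alpha=scale)
    with torch.no_grad():
        for p, s in zip(params, summed):
            # Gaussian noise with std = noise_multiplier * clip_norm.
            noise = torch.randn_like(s) * noise_multiplier * clip_norm
            p.add_((s + noise) / len(batch_x), alpha=-lr)
```

The noise_multiplier is what maps to the privacy budget epsilon: the paper's observation that relatively weak guarantees (a larger epsilon, hence less noise) already defeat the attack means strong formal guarantees need not be bought with a large utility loss.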

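The abstract names a white-box membership inference attack without detailing it. The sketch below shows one common white-box signal, per-example loss plus gradient norm, offered as an assumption for illustration rather than the authors' exact attack; training members tend to score higher because the model has partially memorized them.

```python
# Hypothetical white-box membership-inference score (PyTorch assumed).
import torch

def membership_score(model, loss_fn, x, y):
    params = [p for p in model.parameters() if p.requires_grad]
    loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
    grads = torch.autograd.grad(loss, params)
    grad_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    # Lower loss and smaller gradients => more likely a training member.
    return -(loss.item() + float(grad_norm))

def infer_members(model, loss_fn, samples, threshold):
    # The decision threshold would be calibrated, e.g. on shadow models.
    return [membership_score(model, loss_fn, x, y) > threshold
            for x, y in samples]
```

Differentially private training caps each example's influence on the final weights, flattening exactly these signals, which is why even weak guarantees can drive such an attack back to chance level.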

