One size does not fit all: Investigating strategies for differentially-private learning across NLP tasks

12/15/2021
by   Manuel Senge, et al.

Preserving privacy when training modern NLP models comes at a cost. We know that stricter privacy guarantees in differentially-private stochastic gradient descent (DP-SGD) generally degrade model performance. However, previous research on the efficiency of DP-SGD in NLP is inconclusive or even counter-intuitive. In this short paper, we provide a thorough analysis of different privacy-preserving strategies on seven downstream datasets spanning five 'typical' NLP tasks of varying complexity, using modern neural models. We show that, unlike standard non-private approaches to solving NLP tasks, where bigger is usually better, privacy-preserving strategies do not exhibit a single winning pattern, and each task and privacy regime requires special treatment to achieve adequate performance.
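For readers unfamiliar with the mechanism the paper evaluates, the sketch below illustrates the core DP-SGD update (per-example gradient clipping followed by Gaussian noise, as in Abadi et al., 2016) on a toy logistic-regression problem. The clipping norm, noise multiplier, learning rate, and synthetic data are illustrative assumptions, not the paper's experimental settings or models.

```python
# Minimal DP-SGD sketch on a toy logistic-regression task (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary-classification data (hypothetical, not from the paper).
n, d = 256, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)

def per_example_grads(w, X_batch, y_batch):
    """Logistic-loss gradients, one row per example."""
    p = 1.0 / (1.0 + np.exp(-(X_batch @ w)))   # predicted probabilities
    return (p - y_batch)[:, None] * X_batch     # shape: (batch_size, d)

def dp_sgd_step(w, X_batch, y_batch, lr=0.1, C=1.0, sigma=1.0):
    g = per_example_grads(w, X_batch, y_batch)
    # 1) Clip each per-example gradient to L2 norm at most C.
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g_clipped = g / np.maximum(1.0, norms / C)
    # 2) Sum, add Gaussian noise scaled to the clipping bound, then average.
    noise = sigma * C * rng.normal(size=w.shape)
    g_private = (g_clipped.sum(axis=0) + noise) / len(X_batch)
    # 3) Ordinary SGD step on the privatized gradient.
    return w - lr * g_private

w = np.zeros(d)
batch_size = 32
for step in range(200):
    idx = rng.choice(n, size=batch_size, replace=False)
    w = dp_sgd_step(w, X[idx], y[idx])

acc = ((X @ w > 0).astype(float) == y).mean()
print(f"Training accuracy under toy DP-SGD: {acc:.2f}")
```

Larger noise multipliers (stricter privacy) make the privatized gradient less informative, which is the performance trade-off the paper studies across tasks.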
