Privacy- and Utility-Preserving Textual Analysis via Calibrated Multivariate Perturbations

10/20/2019
by   Oluwaseyi Feyisetan, et al.

Accurately learning from user data while providing quantifiable privacy guarantees offers an opportunity to build better ML models while maintaining user trust. This paper presents a formal approach to carrying out privacy-preserving text perturbation using the notion of dx-privacy, originally designed to achieve geo-indistinguishability for location data. Our approach applies carefully calibrated noise to vector representations of words in a high-dimensional space as defined by word embedding models. We present a privacy proof that satisfies dx-privacy, where the privacy parameter epsilon provides guarantees with respect to a distance metric defined by the word embedding space. We demonstrate how epsilon can be selected by analyzing plausible deniability statistics, backed up by large-scale analysis on GloVe and fastText embeddings. We conduct privacy audit experiments against two baseline models and utility experiments on three datasets to demonstrate the tradeoff between privacy and utility for varying values of epsilon on different task types. Our results demonstrate practical utility (< 2% utility loss) alongside better privacy guarantees than baseline models.
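The mechanism described in the abstract can be sketched in a few lines: sample noise calibrated to the embedding-space metric, add it to a word's vector, then snap the noisy vector back to the nearest vocabulary word. The sketch below is illustrative, not the authors' exact implementation; it uses the common construction for metric DP in Euclidean space (noise direction uniform on the unit sphere, magnitude drawn from a Gamma distribution with shape d and scale 1/epsilon), and the toy vocabulary is invented for the example.

```python
import numpy as np

def perturb_embedding(vec, epsilon, rng):
    """Add metric-DP-calibrated noise to an embedding vector.

    Noise = (uniform direction on the unit sphere) * (magnitude ~ Gamma(d, 1/epsilon)),
    a multivariate Laplace-style distribution often used for dx-privacy
    in Euclidean embedding spaces (illustrative sketch).
    """
    d = vec.shape[0]
    direction = rng.normal(size=d)          # isotropic Gaussian ...
    direction /= np.linalg.norm(direction)  # ... normalized -> uniform on sphere
    magnitude = rng.gamma(shape=d, scale=1.0 / epsilon)
    return vec + magnitude * direction

def privatize_word(word, embeddings, epsilon, rng):
    """Perturb a word's vector, then return the nearest vocabulary word."""
    noisy = perturb_embedding(embeddings[word], epsilon, rng)
    vocab = list(embeddings)
    dists = [np.linalg.norm(embeddings[w] - noisy) for w in vocab]
    return vocab[int(np.argmin(dists))]

# Toy 2-d "embedding" for illustration only.
emb = {"cat": np.array([1.0, 0.0]),
       "dog": np.array([0.0, 1.0]),
       "car": np.array([5.0, 5.0])}
rng = np.random.default_rng(0)
print(privatize_word("cat", emb, epsilon=1000.0, rng=rng))  # large epsilon: little noise
print(privatize_word("cat", emb, epsilon=0.1, rng=rng))     # small epsilon: may flip word
```

Larger epsilon means smaller expected noise (mean magnitude d/epsilon), so the word usually maps back to itself; smaller epsilon increases the chance of returning a different word, which is what yields plausible deniability.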

Related research
10/22/2020 - A Differentially Private Text Perturbation Method Using a Regularized Mahalanobis Metric
Balancing the privacy-utility tradeoff is a crucial requirement of many ...

10/20/2019 - Leveraging Hierarchical Representations for Preserving Privacy and Utility in Text
Guaranteeing a certain level of user privacy in an arbitrary piece of te...

12/10/2020 - Research Challenges in Designing Differentially Private Text Generation Mechanisms
Accurately learning from user data while ensuring quantifiable privacy g...

07/24/2019 - Privacy Parameter Variation Using RAPPOR on a Malware Dataset
Stricter data protection regulations and the poor application of privacy...

06/02/2023 - Driving Context into Text-to-Text Privatization
Metric Differential Privacy enables text-to-text privatization by adding...

07/16/2021 - TEM: High Utility Metric Differential Privacy on Text
Ensuring the privacy of users whose data are used to train Natural Langu...

08/22/2023 - A novel analysis of utility in privacy pipelines, using Kronecker products and quantitative information flow
We combine Kronecker products, and quantitative information flow, to giv...
