TEM: High Utility Metric Differential Privacy on Text

07/16/2021, by Ricardo Silva Carvalho, et al.

Ensuring the privacy of users whose data are used to train Natural Language Processing (NLP) models is necessary to build and maintain customer trust. Differential Privacy (DP) has emerged as the most successful method to protect the privacy of individuals. However, applying DP to the NLP domain comes with unique challenges. The most successful previous methods use a generalization of DP for metric spaces, and apply the privatization by adding noise to inputs in the metric space of word embeddings. However, these methods assume that one specific distance measure is being used, ignore the density of the space around the input, and assume the embeddings used have been trained on non-sensitive data. In this work we propose Truncated Exponential Mechanism (TEM), a general method that allows the privatization of words using any distance metric, on embeddings that can be trained on sensitive data. Our method makes use of the exponential mechanism to turn the privatization step into a selection problem. This allows the noise applied to be calibrated to the density of the embedding space around the input, and makes domain adaptation possible for the embeddings. In our experiments, we demonstrate that our method significantly outperforms the state-of-the-art in terms of utility for the same level of privacy, while providing more flexibility in the metric selection.


1 Introduction

Nowadays, text data are used as input for a wide variety of machine learning tasks, from next-word prediction in mobile keyboards [hard2018federated] to critical tasks like predicting patient health conditions from clinical records [yao2019clinical]. Researchers have demonstrated that simple exploratory analysis tasks [violate2004data] or the use of models trained on sensitive data [shokri2017membership] may breach the privacy of the individuals involved. Even though there has been research focused on specific privacy-preserving tasks with textual data, such as language models [mcmahan2017learning, bo2019er, vu2019dpugc], to the best of our knowledge only a more recent line of work [natasha2018obfuscation, poincareMDP2019, madlib2020] has focused on providing quantifiable privacy guarantees over the text itself.

In this work we aim to provide privacy guarantees for textual data using the formal notion of differential privacy (DP) [dwork2006calibrating], one of the most widely adopted privacy frameworks in industry and academia. Specifically, we apply a generalization called metric differential privacy [chatzikokolakis2013metricdp], which allows analysts to tailor solutions over general distance metrics. Previous work [natasha2018obfuscation, poincareMDP2019, madlib2020] in this setting considered each input word by its vector representation and added noise to it to provide privacy guarantees. Additionally, as a noisy vector is unlikely to exactly represent a valid word, these methods returned a nearest-neighbor approximation after querying the representation space. However, these works treat the representation space as non-sensitive, since they do not account for the privacy loss incurred by the nearest-neighbor search. Moreover, the noise added to a vector does not take into account the density of the region the vector lies in, which can potentially reduce the utility of a DP algorithm.

Our main contribution is the design of a new mechanism, which we call the Truncated Exponential Mechanism (TEM), that satisfies metric-DP over textual data by posing the task as a selection problem. Instead of perturbing a representation vector, our method selects an output from a set of possible candidates, where words closer to the input word in the metric space have a higher probability of being selected. TEM adapts the selection probabilities to the region around a given input word, adjusting the injected noise for better utility, and allows the use of any formal distance function as the metric. The mechanism includes a truncation step that selects from high-utility words with high probability, providing computationally efficient word selection with a tunable error parameter. Our experiments show that TEM obtains higher utility than the state-of-the-art for the same level of privacy.

2 Related Work

natasha2018obfuscation worked on text data via the “bag of words” representation of documents and applied the Earth Mover’s metric to obtain privatized bags, performing individual word privatization in the context of metric differential privacy. In the same context, the Madlib mechanism [madlib2020] (we refer to this algorithm by the name used in [madlibBlog]) works in Euclidean space, adding Laplacian noise to the embedding vector of each word. After introducing noise, the mechanism outputs the word whose embedding is closest to the noisy vector. The algorithm presented in [poincareMDP2019] is a follow-up to [madlib2020], although it appeared later. This mechanism works in a hierarchical embedding space, where the embedding vector of an input word is perturbed with noise from a hyperbolic distribution.

poincareMDP2019 compare the hyperbolic mechanism to Madlib [madlib2020]. However, since the two algorithms use different metric functions, evaluating privacy by matching only the $\varepsilon$ parameter of differential privacy is not sufficient. For this reason, poincareMDP2019 compare the privacy of the two mechanisms by looking at the probability of not changing a word after noise injection, i.e., the probability that the mechanism returns the exact same word used as input. Even though this notion can intuitively be seen as a level of indistinguishability, it cannot guarantee a fair comparison between mechanisms. The issue of comparing metric-DP mechanisms that use different metric functions thus remains an open problem. In this work, we only compare mechanisms using the same metric function (Euclidean distance) to ensure a fair comparison.

3 Preliminaries

Consider a user giving as input a word $w$ from a discrete fixed domain $\mathcal{W}$. For any pair of inputs $w$ and $w'$, we assume a distance function $d(w, w')$ defined over a given representation space for these words. More specifically, we consider that a word embedding model will be used to represent words, and the distance function can be any valid metric applicable to the embedding vectors.

Our goal is to select a word from $\mathcal{W}$, based on a given input, such that the privacy of the user with respect to this choice is preserved. From an attacker's perspective, the outputs of an algorithm run on inputs $w$ or $w'$ should become more similar as these inputs become closer with respect to $d$. Intuitively, words that are distant in the metric space will be more easily distinguishable than words that are close.

With that in mind, we work with metric differential privacy [chatzikokolakis2013metricdp], a privacy standard defined for randomized algorithms whose inputs come from a domain equipped with a distance function satisfying the formal axioms of a metric. In this context, the privacy guarantees given by metric-DP depend not only on the privacy parameter $\varepsilon$, but also on the distance metric used.

Definition 3.1.

(Metric differential privacy [chatzikokolakis2013metricdp]). Given a distance metric $d: \mathcal{W} \times \mathcal{W} \rightarrow \mathbb{R}_{\geq 0}$, a randomized mechanism $M: \mathcal{W} \rightarrow \mathcal{Y}$ is $\varepsilon d$-differentially private if for any $w, w' \in \mathcal{W}$ and all outputs $y \in \mathcal{Y}$ we have:

$$\Pr[M(w) = y] \;\leq\; e^{\varepsilon\, d(w, w')} \Pr[M(w') = y]. \qquad (1)$$
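As a worked illustration (not part of the original definition), the guarantee can be read as a privacy loss that scales with the distance between inputs:

```latex
% Worked example: the indistinguishability guarantee scales with distance.
% For two inputs w, w' at distance d(w, w') = r, applying Definition 3.1 in
% both directions gives, for every output y,
\[
  e^{-\varepsilon r}\,\Pr[M(w') = y] \;\le\; \Pr[M(w) = y] \;\le\; e^{\varepsilon r}\,\Pr[M(w') = y],
\]
% so nearby words (small r) are strongly indistinguishable, while words far
% apart in the metric space retain a weaker guarantee.
```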

For the Euclidean distance metric, as discussed in Section 2, the current state-of-the-art is the Madlib mechanism, which adds Laplacian noise to a given embedding vector in order to obtain a private output.

Input: Finite domain $\mathcal{W}$, input word $w$, and privacy parameter $\varepsilon$.
Output: Privatized element.

1:  Compute embedding $\phi(w)$
2:  Perturb the embedding to obtain $\hat{\phi} = \phi(w) + N$, with noise sampled from the density $p_N(z) \propto \exp(-\varepsilon \lVert z \rVert)$
3:  Return the perturbed word $\hat{w} = \operatorname{argmin}_{u \in \mathcal{W}} \lVert \phi(u) - \hat{\phi} \rVert$
Algorithm 1 - Madlib: Word Privatization Mechanism for Metric Differential Privacy

For a Euclidean metric $d$, Madlib provides metric differential privacy.

Theorem 1.

For the Euclidean metric $d(w, w') = \lVert \phi(w) - \phi(w') \rVert$, Algorithm 1 is $\varepsilon d$-differentially private.
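For intuition, below is a minimal NumPy sketch of this additive-noise approach. It is an illustration only, not the reference implementation of [madlib2020]: the `embedding`, `vocab_matrix`, and `vocab_words` structures are assumed for exposition, and the noise norm is drawn from the Gamma distribution commonly used to sample from a density proportional to $\exp(-\varepsilon \lVert z \rVert)$.

```python
import numpy as np

def madlib_privatize(word, embedding, vocab_matrix, vocab_words, epsilon, rng=None):
    """Sketch of an additive-noise (Madlib-style) word privatization step.

    embedding    : dict mapping a word to its d-dimensional vector.
    vocab_matrix : (|W|, d) array of all vocabulary vectors.
    vocab_words  : list of words aligned with vocab_matrix rows.
    """
    rng = rng or np.random.default_rng()
    vec = embedding[word]
    dim = vec.shape[0]

    # Sample noise with density proportional to exp(-epsilon * ||z||):
    # a uniformly random direction scaled by a Gamma(dim, 1/epsilon) magnitude.
    direction = rng.normal(size=dim)
    direction /= np.linalg.norm(direction)
    magnitude = rng.gamma(shape=dim, scale=1.0 / epsilon)
    noisy_vec = vec + magnitude * direction

    # Return the vocabulary word closest to the noisy vector (Euclidean).
    nearest = np.argmin(np.linalg.norm(vocab_matrix - noisy_vec, axis=1))
    return vocab_words[nearest]
```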

Next we describe our algorithm, which satisfies metric-DP, and give a formal proof of its privacy guarantees.

4 Metric Truncated Exponential Mechanism

At its core, our algorithm uses the Exponential Mechanism (EM) [mt2007expmech], which is often used for selection in the context of differential privacy [dwork2006calibrating].

Input: Finite domain $\mathcal{W}$, input word $w$, truncation threshold $\gamma$, metric $d$, and privacy parameter $\varepsilon$.
Output: Privatized element.

1:  Given input $w$, obtain the set $\mathcal{L}_w$ such that each word $w' \in \mathcal{L}_w$ satisfies $d(w, w') \leq \gamma$, and let $\mathcal{L}_w^{\perp} = \mathcal{W} \setminus \mathcal{L}_w$
2:  Set the score of each $w' \in \mathcal{L}_w$ as $-d(w, w')$
3:  Create a $\perp$ element with score $-\gamma + \frac{2 \ln |\mathcal{L}_w^{\perp}|}{\varepsilon}$
4:  For each element of $\mathcal{L}_w \cup \{\perp\}$, add Gumbel noise with mean 0 and scale $\frac{2}{\varepsilon}$ to its score
5:  Select $\hat{w}$ as the element with maximum noisy score from $\mathcal{L}_w \cup \{\perp\}$
6:  if $\hat{w} = \perp$ then return a uniformly random sample of $\mathcal{L}_w^{\perp}$
7:  else return $\hat{w}$
Algorithm 2 - TEM: Metric Truncated Exponential Mechanism

Algorithm 2, denoted TEM, uses a variant of the exponential mechanism with Gumbel noise [durfee2019practical], adapted to metric-DP for any given distance metric. TEM starts by selecting from the words whose distance to the input is at most a threshold $\gamma$, and includes a $\perp$ element to account for the words outside this distance. The privacy proof is deferred to Appendix A.1.
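For intuition, below is a minimal NumPy sketch of the selection step, an illustration under simplifying assumptions (a Python dict `embedding` of word vectors and the Euclidean metric); it is not the implementation used in our experiments.

```python
import numpy as np

def tem_privatize(word, embedding, vocab_words, gamma, epsilon, rng=None):
    """Sketch of the truncated exponential mechanism of Algorithm 2.

    embedding   : dict mapping each word to its vector.
    vocab_words : list of candidate output words.
    gamma       : truncation threshold on the metric d (Euclidean here).
    """
    rng = rng or np.random.default_rng()
    vec = embedding[word]

    # Step 1: split the vocabulary by the truncation threshold gamma.
    dists = {w: np.linalg.norm(embedding[w] - vec) for w in vocab_words}
    close = [w for w in vocab_words if dists[w] <= gamma]
    far = [w for w in vocab_words if dists[w] > gamma]

    # Steps 2-3: scores are negative distances; the ⊥ element aggregates
    # all far words with score -gamma + (2/epsilon) * ln(|far|).
    candidates = list(close)
    scores = np.array([-dists[w] for w in close])
    if far:
        candidates.append(None)  # None stands for the ⊥ element
        scores = np.append(scores, -gamma + (2.0 / epsilon) * np.log(len(far)))

    # Steps 4-5: Gumbel noise with scale 2/epsilon, then noisy argmax
    # (equivalent to exponential-mechanism sampling).
    noisy = scores + rng.gumbel(loc=0.0, scale=2.0 / epsilon, size=len(scores))
    selected = candidates[int(np.argmax(noisy))]

    # Steps 6-7: if ⊥ was selected, return a uniform sample from the far words.
    return rng.choice(far) if selected is None else selected
```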

Theorem 2.

For any formal distance metric $d$, Algorithm 2 is $\varepsilon d$-differentially private.

To also optimize for utility, we next give a theoretical prescription for choosing $\gamma$ so that the mechanism selects elements within distance $\gamma$ of the input with high probability. We defer the proof to Appendix A.2.

Theorem 3.

For $\gamma \geq \frac{2}{\varepsilon} \ln\left(\frac{(1-\beta)(|\mathcal{W}|-1)}{\beta}\right)$ and input $w$, TEM outputs elements with distance at most $\gamma$ from $w$ with probability at least $1 - \beta$, for any $\beta \in (0, 1)$.

The result above guarantees that, for a given distance threshold, words close to the input are output with high probability. It therefore provides a theoretical way to choose $\gamma$ without looking at the data, i.e., without incurring privacy loss, while retaining utility guarantees with high probability.
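For concreteness, the threshold implied by Theorem 3 can be computed directly from $\varepsilon$, the vocabulary size, and the failure probability $\beta$. The helper below follows the bound as reconstructed above; the vocabulary size and $\beta$ in the example are illustrative values only, not the settings used in our experiments.

```python
import numpy as np

def theorem3_gamma(epsilon, vocab_size, beta):
    """Smallest truncation threshold gamma satisfying the Theorem 3 bound,
    so that TEM outputs a word within distance gamma of the input with
    probability at least 1 - beta (worst case: only the input word is close)."""
    return (2.0 / epsilon) * np.log((1.0 - beta) * (vocab_size - 1) / beta)

# Example with illustrative values: epsilon = 10, a 400k-word vocabulary, beta = 0.001.
print(theorem3_gamma(epsilon=10.0, vocab_size=400_000, beta=0.001))
```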

Below we highlight some of the advantages of TEM, specifically compared to the state-of-the-art.

4.1 Detailed comparison with previous work

TEM works for any given distance function that satisfies the axioms of a metric. This gives it an advantage for future use compared to previous mechanisms that rely on fixed metrics, such as Euclidean [madlib2020] and hyperbolic [poincareMDP2019], where changing the metric would require additional privacy analysis.

Previous work [natasha2018obfuscation, madlib2020, poincareMDP2019] considered the text privacy preservation problem mainly as a task of releasing a word embedding vector after perturbing it with noise. This means they add noise to each dimension of the vector they aim to release, treating every word embedding vector the same way. In practice, this leads to adding the same amount of noise for any word in the embedding space, regardless of whether the word lies in a dense or sparse region. In contrast, TEM preserves privacy by posing the task as a selection problem, giving words closer to the input word a higher probability of being selected. TEM therefore behaves more dynamically, adjusting the noise to the domain of selection. In practice, for a given $\varepsilon$, TEM adds less noise in regions of high density and more noise in regions of low density, offering better utility in high-density areas.

Moreover, previous work assumed the word embeddings used were trained on a separate dataset, distinct from the data being privatized. TEM does not have that requirement, providing the potential for further utility gains through the use of domain adaptation, i.e. fine-tuning public pre-trained embeddings on the target sensitive data [plank2013embedding, jaech2016domain].

In terms of computational cost, the bottleneck of previous work is the nearest-neighbor search over the noisy embedding vector obtained from the input. TEM starts by retrieving the elements within distance $\gamma$ of the input; instead of querying for the nearest neighbor, it queries for neighbors within a given range. Both approaches can therefore rely on fast approximate nearest-neighbor implementations that support both nearest-neighbor and range queries, such as [faissJDH17]. Nonetheless, TEM is the only mechanism that, for a fixed domain, can pre-process and store the search results for a given $\gamma$. After this step, the range-search cost becomes constant. For the interested reader, we include a rewriting of Algorithm 2 with a pre-processing step in Appendix B.
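As an illustration of the pre-processing idea, below is a brute-force NumPy sketch that builds the per-word neighbor lists for a fixed vocabulary. For large vocabularies, a range-search index such as the library cited above would replace the inner loop; all names here are assumed for exposition.

```python
import numpy as np

def precompute_neighbor_lists(vocab_words, vocab_matrix, gamma):
    """For each word, store the indices of all words within distance gamma.

    vocab_matrix is a (|W|, d) array of embedding vectors aligned with
    vocab_words. After this step, the range query for any input word is a
    constant-time dictionary lookup."""
    lists = {}
    for i, w in enumerate(vocab_words):
        # Euclidean distances from word i to every vocabulary vector.
        dists = np.linalg.norm(vocab_matrix - vocab_matrix[i], axis=1)
        lists[w] = np.nonzero(dists <= gamma)[0]
    return lists
```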

Next we empirically compare our mechanism with the state-of-the-art Madlib mechanism [madlib2020].

5 Experiments

For a fair comparison to Madlib, we use TEM with the Euclidean distance on a fixed embedding space from GloVe [pennington2014glove]. Experiments use the IMDB reviews dataset [imdbData]. Further details are deferred to Appendix C.

Utility: To evaluate the utility of the metric-DP mechanisms, we build sentiment classification models on training data privatized by each mechanism, as well as a baseline model trained on the original sensitive data, and compare the accuracy of the trained models on a test dataset.

Privacy: As both mechanisms use the Euclidean distance metric, their privacy guarantees are matched by using the same $\varepsilon$. Nonetheless, for illustration we include the results of a Membership Inference Attack (MIA) [shokri2017membership], which tries to infer the presence of observations used to train a given model based only on black-box access. A lower attack score is better, representing stronger privacy preservation.

Figure 1: (a) Utility evaluation; (b) empirical privacy evaluation. Comparison of mechanisms with 95% confidence intervals over 5 trials for various values of $\varepsilon$. The baseline corresponds to models trained on the original data. TEM used $\gamma$ as given by Theorem 3.

From the results in Figure 1(a) we see that, for a fixed privacy level, TEM significantly outperforms Madlib. More specifically, TEM's results show that large values of $\varepsilon$ give a formal level of privacy that is not meaningful. However, since Madlib adds more noise than needed in different regions of the embedding space, it still achieves some empirical privacy, as observed in the MIA results of Figure 1(b). Nonetheless, such empirical behaviour is not a formal guarantee of privacy, which for metric-DP is bounded by $\varepsilon d$, and is therefore a loose guarantee for the embedding space considered. When comparing TEM and Madlib at the same formal level of metric-DP privacy, i.e., the same $\varepsilon$ and metric space, we clearly see better utility for TEM. Finally, at the value of $\varepsilon$ where both mechanisms obtain similar MIA scores, TEM attains a substantially higher average test accuracy than Madlib, a significant relative utility improvement.

6 Conclusion

We presented TEM, a mechanism for text privatization under metric differential privacy with formal guarantees. Unlike the current state-of-the-art, our method allows the safe use of sensitive embeddings and provides flexibility in the choice of metric. In addition, TEM adapts the noise introduced to improve utility. Finally, it allows pre-processing steps for enhanced computational efficiency. Our empirical evaluation demonstrates that TEM obtains better utility than the current state-of-the-art for the same formal privacy guarantees. As future work, we plan to perform domain adaptation, in order to leverage embeddings trained on sensitive data to improve utility. Privatizing word context vectors is also a possible enhancement for improved accuracy.

References

Appendix A Omitted Proofs

In this section we include the omitted proofs.

A.1 Proof of Theorem 2

To prove the privacy of TEM, we first need to show that the sensitivity of the score function is still bounded by the distance between the inputs after the truncation. We therefore first prove a lemma giving this sensitivity result, and then prove the privacy of our variant of the EM.

Lemma A.1.

For any $w, w' \in \mathcal{W}$ and any $x \in \mathcal{W}$, the truncated score $u(w, x) = -\min\{d(w, x), \gamma\}$ satisfies $|u(w, x) - u(w', x)| \leq d(w, w')$, i.e., the sensitivity of the score function with respect to the metric $d$ is at most $1$.

Proof.

For a given input $w$, let us denote by $\mathcal{L}_w$ the set of domain elements $x$ that have $d(w, x) \leq \gamma$, and therefore keep their original distances in the score function, while the elements in $\mathcal{L}_w^{\perp} = \mathcal{W} \setminus \mathcal{L}_w$ have their distances fixed at $\gamma$.

In this context, there are four possible cases we need to analyze for a given $x \in \mathcal{W}$ and any pair $w, w' \in \mathcal{W}$, which we examine now.

Case 1: $x \in \mathcal{L}_w$ and $x \in \mathcal{L}_{w'}$. If $x$ is in both $\mathcal{L}_w$ and $\mathcal{L}_{w'}$, then its original distance is used in the score for both $w$ and $w'$. Thus we have:

$$|u(w, x) - u(w', x)| = |d(w', x) - d(w, x)| \leq d(w, w'),$$

with the last inequality being the triangle inequality for the distance metric $d$.

Case 2: $x \in \mathcal{L}_w$ and $x \notin \mathcal{L}_{w'}$. If $x$ is in $\mathcal{L}_w$ but not in $\mathcal{L}_{w'}$, then its original distance is used in the score for $w$ and the truncated distance $\gamma$ is used for $w'$, therefore:

$$|u(w, x) - u(w', x)| = |\gamma - d(w, x)| = \gamma - d(w, x).$$

Since $x$ is not in $\mathcal{L}_{w'}$, it means that $d(w', x) > \gamma$, or equivalently $\gamma < d(w', x)$, which, substituted in the result above, gives:

$$\gamma - d(w, x) < d(w', x) - d(w, x) \leq d(w, w'),$$

where we use the triangle inequality in the last step.

Case 3: $x \notin \mathcal{L}_w$ and $x \in \mathcal{L}_{w'}$. If $x$ is not in $\mathcal{L}_w$ but is in $\mathcal{L}_{w'}$, then the truncated distance $\gamma$ is used in the score for $w$ and the original distance is used for $w'$, which gives us:

$$|u(w, x) - u(w', x)| = |d(w', x) - \gamma| = \gamma - d(w', x).$$

Since $x$ is in $\mathcal{L}_{w'}$, it means that $d(w', x) \leq \gamma$, and since $x$ is not in $\mathcal{L}_w$ we have $d(w, x) > \gamma$, which, substituted in the result above, shows:

$$\gamma - d(w', x) < d(w, x) - d(w', x) \leq d(w, w').$$

Case 4: $x \notin \mathcal{L}_w$ and $x \notin \mathcal{L}_{w'}$. If $x$ is in neither $\mathcal{L}_w$ nor $\mathcal{L}_{w'}$, then the truncated distance $\gamma$ is used in the score for both $w$ and $w'$, giving:

$$|u(w, x) - u(w', x)| = |\gamma - \gamma| = 0 \leq d(w, w').$$

Finally, we note that the bound with the roles of $w$ and $w'$ exchanged follows by symmetry in the cases above.

With the sensitivity result we can now show the privacy guarantee of the mechanism.

Theorem 2. For any formal distance metric $d$, Algorithm 2 is $\varepsilon d$-differentially private.

Proof.

For a given output $x$ and any pair of inputs $w, w' \in \mathcal{W}$ we have:

$$\frac{\Pr[M(w) = x]}{\Pr[M(w') = x]} = \frac{\exp\left(\frac{\varepsilon\, u(w, x)}{2}\right)}{\exp\left(\frac{\varepsilon\, u(w', x)}{2}\right)} \cdot \frac{\sum_{y \in \mathcal{W}} \exp\left(\frac{\varepsilon\, u(w', y)}{2}\right)}{\sum_{y \in \mathcal{W}} \exp\left(\frac{\varepsilon\, u(w, y)}{2}\right)}. \qquad (2)$$

We note that the two summations in the second term range over the same domain of elements; even though the scores do not match across the summations, each pair still satisfies the sensitivity bound of Lemma A.1. For the first term on the right-hand side of Equation (2) we have:

$$\frac{\exp\left(\frac{\varepsilon\, u(w, x)}{2}\right)}{\exp\left(\frac{\varepsilon\, u(w', x)}{2}\right)} = \exp\left(\frac{\varepsilon\, (u(w, x) - u(w', x))}{2}\right) \leq \exp\left(\frac{\varepsilon\, d(w, w')}{2}\right). \qquad (3)$$

And similarly for the second term on the right-hand side of Equation (2):

$$\frac{\sum_{y \in \mathcal{W}} \exp\left(\frac{\varepsilon\, u(w', y)}{2}\right)}{\sum_{y \in \mathcal{W}} \exp\left(\frac{\varepsilon\, u(w, y)}{2}\right)} \leq \exp\left(\frac{\varepsilon\, d(w, w')}{2}\right).$$

Therefore, multiplying the two inequalities above gives, for Equation (2):

$$\frac{\Pr[M(w) = x]}{\Pr[M(w') = x]} \leq \exp\left(\varepsilon\, d(w, w')\right), \qquad (4)$$

which proves the metric-DP guarantee of the mechanism.

The only difference between TEM as written and the probabilities used here is that, instead of directly assigning the truncated score to all elements with distance greater than $\gamma$, TEM first selects the aggregated $\perp$ element. We now show that the two formulations are equivalent.

Let $\mathcal{L}_w$ be the list of elements $w' \in \mathcal{W}$ that satisfy $d(w, w') \leq \gamma$ and let $\mathcal{L}_w^{\perp}$ be the list of remaining elements $\mathcal{W} \setminus \mathcal{L}_w$. Elements in $\mathcal{L}_w^{\perp}$ are selected by TEM uniformly at random after $\perp$ is selected. Since $\perp$ has score $-\gamma + \frac{2 \ln |\mathcal{L}_w^{\perp}|}{\varepsilon}$, its weight in the exponential mechanism is:

$$\exp\left(\frac{\varepsilon}{2}\left(-\gamma + \frac{2 \ln |\mathcal{L}_w^{\perp}|}{\varepsilon}\right)\right) = |\mathcal{L}_w^{\perp}| \exp\left(-\frac{\varepsilon \gamma}{2}\right).$$

This is the same as entering each element of $\mathcal{L}_w^{\perp}$ into the selection directly with score $-\gamma$, since after selecting $\perp$ the elements of $\mathcal{L}_w^{\perp}$ are sampled uniformly, giving each a selection probability proportional to $\exp\left(-\frac{\varepsilon \gamma}{2}\right)$.

A.2 Proof of Theorem 3

The following proof of Theorem 3 only uses the fact that, in an exponential mechanism, the probability of selecting a given element is proportional to the exponential of $\frac{\varepsilon}{2}$ times its score.

Theorem 3. For $\gamma \geq \frac{2}{\varepsilon} \ln\left(\frac{(1-\beta)(|\mathcal{W}|-1)}{\beta}\right)$ and input $w$, TEM outputs elements with distance at most $\gamma$ from $w$ with probability at least $1 - \beta$, for any $\beta \in (0, 1)$.

Proof.

This theorem is equivalent to guaranteeing that we output elements outside distance $\gamma$ of the input with probability at most $\beta$. The worst case for this condition is when only the input word itself lies within distance $\gamma$, and all of the remaining $|\mathcal{W}| - 1$ words lie outside. In that case the input word has score $0$ and, by the equivalence shown in Appendix A.1, the $\perp$ element has weight $(|\mathcal{W}| - 1) \exp\left(-\frac{\varepsilon \gamma}{2}\right)$, so the probability of outputting an element outside distance $\gamma$ satisfies

$$\frac{(|\mathcal{W}| - 1)\, e^{-\varepsilon \gamma / 2}}{1 + (|\mathcal{W}| - 1)\, e^{-\varepsilon \gamma / 2}} \leq \beta \iff \gamma \geq \frac{2}{\varepsilon} \ln\left(\frac{(1 - \beta)(|\mathcal{W}| - 1)}{\beta}\right),$$

which is exactly the condition of the theorem.

Appendix B Efficient Version

Here we describe precisely a version of our algorithm that performs an exact search for the elements within distance $\gamma$ of any input.

A first glance at TEM reveals that it can be computationally prohibitive for large domains, so below we develop a computationally efficient version of our mechanism that leverages the properties of our algorithm.

For a fixed finite domain $\mathcal{W}$ and a given truncation threshold $\gamma$, we pre-compute, for each possible input $w$, the list $\mathcal{L}_w$ of elements that satisfy $d(w, w') \leq \gamma$, which makes the search for the candidates of a given input a constant-time lookup. Another simplification already included in TEM is the use of a version of the exponential mechanism based on Gumbel noise [durfee2019practical], which avoids explicitly computing selection probabilities with the exponential function. Finally, we group all elements beyond the truncation threshold into a single $\perp$ element carrying the aggregated count; if $\perp$ is selected, we then sample one of the aggregated elements uniformly at random. Below we give this simplified algorithm and prove it is equivalent to Algorithm 2.

Input: Finite domain $\mathcal{W}$ of $n$ elements, input word $w$ (given by its index), truncation threshold $\gamma$, metric $d$, and privacy parameter $\varepsilon$.
Output: Index of the privatized element.

1:  Pre-processing:
2:  for each $w \in \mathcal{W}$ do
3:     Create a list $\mathcal{L}_w$ of elements where each element $w' \in \mathcal{W}$ satisfies $d(w, w') \leq \gamma$, and let $\mathcal{L}_w^{\perp}$ be the list of remaining elements $\mathcal{W} \setminus \mathcal{L}_w$
4:     Define for each $w' \in \mathcal{L}_w$ the score $-d(w, w')$, and the score of a $\perp$ element as $-\gamma + \frac{2 \ln |\mathcal{L}_w^{\perp}|}{\varepsilon}$
5:  end for
6:  Selection:
7:  Given input $w$, for every element in $\mathcal{L}_w \cup \{\perp\}$, add noise from a Gumbel distribution with mean 0 and scale $\frac{2}{\varepsilon}$ to its score
8:  Set $\hat{w}$ as the element with maximum noisy score
9:  if $\hat{w} = \perp$ then
10:     Return a uniformly random sample of $\mathcal{L}_w^{\perp}$
11:  else
12:     Return $\hat{w}$
13:  end if
Algorithm 3 - Metric Truncated Exponential Mechanism

We now formally prove Algorithm 3 is equivalent to Algorithm 2.

Lemma B.1.

Algorithm 3 and Algorithm 2 are equal in distribution.

Proof.

The only difference between the algorithms is that Algorithm 3 pre-computes $\mathcal{L}_w$ and $\mathcal{L}_w^{\perp}$ ahead of time. Since, for a fixed domain, these lists do not change and are independent for each input word, the two algorithms are equal in distribution.

Since the two algorithms are equivalent, Algorithm 3 also satisfies the same privacy guarantee.

Corollary B.1.

Algorithm 3 is $\varepsilon d$-differentially private.

Appendix C Experiments details

In this section, we describe the exact settings and include all of the details for the mechanisms used, to allow reproducibility.

We used the IMDB dataset [imdbData], which provides two files, training data and testing data, each with 25,000 examples. For the baseline, we trained a model on dataset TR1, consisting of 50% of the IMDB training data, and tested it on TE1, consisting of 50% of the IMDB testing data. For the privatized utility, we trained models on TR1 after privatization by each mechanism and tested them on the original TE1.

For the MIA, the models trained as described above were attacked; we denote a given target model as T. For the shadow model, denoted S, we trained a model on dataset TE2, containing the other 50% of the IMDB testing data. To train the attack model, denoted A, we used as features the outputs of S on TE1 and TE2, where TE2 is labeled as "in" and TE1 as "out". After training model A, we evaluated the inference attack on TR1 and TR2 (the other 50% of the IMDB training data), with features being the outputs of a given target model T on TR1 and TR2, where the ground truth for TR1 is "in" and for TR2 is "out".

For embeddings we used GloVe [pennington2014glove] with 300 dimensions. The sentiment classification models follow the FastText classifier [Joulin_2017], whereas the attack model is an MLP with two hidden layers of 64 nodes each and ReLU activations. Each model was trained for 20 epochs with a batch size of 64 and the default PyTorch parameters for the Adam optimizer.
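For completeness, below is a minimal PyTorch sketch of the attack model described above. The input dimensionality and variable names are assumptions for exposition; training uses 20 epochs, batch size 64, and default Adam parameters as stated.

```python
import torch
import torch.nn as nn

class AttackMLP(nn.Module):
    """Membership-inference attack model: two hidden layers of 64 units, ReLU."""
    def __init__(self, in_dim=2):  # assumed: the target model's output probabilities
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 2),  # "in" vs "out" membership prediction
        )

    def forward(self, x):
        return self.net(x)

model = AttackMLP()
optimizer = torch.optim.Adam(model.parameters())  # default PyTorch parameters
loss_fn = nn.CrossEntropyLoss()
# Training loop (not shown): 20 epochs, batch size 64, over pairs of
# (shadow-model outputs, in/out membership labels) as described above.
```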