BRR: Preserving Privacy of Text Data Efficiently on Device

07/16/2021
by   Ricardo Silva Carvalho, et al.

With the use of personal devices connected to the Internet for tasks such as searches and shopping becoming ubiquitous, ensuring the privacy of the users of such services has become a requirement in order to build and maintain customer trust. While text privatization methods exist, they require the existence of a trusted party that collects user data before applying a privatization method to preserve users' privacy. In this work we propose an efficient mechanism to provide metric differential privacy for text data on-device. With our solution, sensitive data never leaves the device and service providers only have access to privatized data to train models on and analyze. We compare our algorithm to the state-of-the-art for text privatization, showing similar or better utility for the same privacy guarantees, while reducing the storage costs by orders of magnitude, enabling on-device text privatization.



1 Introduction

As users interact with more and more edge devices through text and speech, organizations are able to train models on the data received from users to offer better services and drive business. However, research has demonstrated that machine learning models trained on sensitive data can be attacked, and sensitive information about individuals extracted from the original training data [shokri2017membership].

In order to build and maintain customer trust, companies must protect the privacy of their users, so that users continue to use and be delighted by the products offered, without mistrust towards the platform and the data holder. The best way to ensure that is to send no sensitive data to the service provider at all, known as the zero-trust model [dwork2006calibrating]. That way sensitive data never leaves the device, and users can be assured that even in the event of a data breach at the service provider, their private data will not be compromised.

Differential Privacy (DP) [dwork2006calibrating] has emerged as the most well-established technique to provide privacy guarantees for individuals. However, DP was originally designed to deal with continuous data, while categorical and text data pose an additional challenge [dwork2014algorithmic]. DP works by applying noise to inputs, and for a naive approach to work for text in the zero-trust model, it would have to ensure that a word can be replaced by any other word in the vocabulary, potentially ruining the utility of the algorithm.

To deal with this restriction, previous research has proposed applying a privatization step in the continuous embedding space instead [natasha2018obfuscation], using a generalization of DP for metric spaces called metric-DP [chatzikokolakis2013metricdp]. These methods work by retrieving the embedding vector of the word we want to privatize, adding noise, and then replacing the word with the one closest to the noisy vector in the embedding space. However, these methods assume a central authority that gathers the data of all users and applies the privatization mechanism to generate a new privatized dataset before training downstream NLP models. If we tried using these mechanisms on-device, the space cost would be prohibitive: the embedding vectors plus the nearest neighbor index necessary to retrieve the perturbed words from the noisy embedding can reach several gigabytes.

In this paper we propose an approach that allows for zero-trust text data privatization, ensuring that sensitive data never leave the user's device and improving trust in the service. The key component of our approach is the use of binary embedding vectors [tissier2019near] to dramatically shrink the space and computational costs of storing and querying the word embeddings, while maintaining semantic meaning. Using binary word embedding representations, we propose a text privatization mechanism based on randomized response [wang2016rr], and prove that the mechanism satisfies metric-DP.

In summary, our contributions are the following:

  • We propose a zero-trust algorithm for on-device text privatization, using binary embeddings and randomized response.

  • We prove that our mechanism satisfies metric-DP, specifically with respect to the Hamming distance.

  • We develop theoretical methods for comparing metric-DP mechanisms that use different metrics, allowing consistent privacy-utility evaluation.

  • Finally, our empirical evaluation demonstrates the computational advantages of the approach compared to the state-of-the-art, while maintaining better or similar utility.

2 Related Work

natasha2018obfuscation were among the first to use metric differential privacy on text data. They focused on the “bag of words” representation of documents and applied the Earth Mover’s metric to obtain privatized bags, and were also the first to perform individual word privatization in the context of metric differential privacy. Following this line, the Madlib mechanism [madlib2020] (we refer to this algorithm by the name used in [madlibBlog]) adds noise to embedding vectors of words, working in Euclidean space and adding Laplacian noise. After introducing noise, the mechanism outputs the word that is closest to the noisy vector in the embedding space. The algorithm presented in [poincareMDP2019] is conceptually a follow-up to [madlib2020], although it appeared earlier. This mechanism works in a hierarchical embedding space, where the embedding vector of an input word is perturbed with noise from a hyperbolic distribution. These works successfully illustrated the privacy-utility trade-off of metric differential privacy, and empirically showed that reasonable privacy guarantees can be achieved, with the impact on the utility of downstream text models depending on the complexity of the downstream task; for example, the complex question-answering task was more affected than binary classification.

poincareMDP2019 compare the hyperbolic mechanism to Madlib [madlib2020]. However, since the two algorithms use different metric functions, evaluating privacy by matching only the privacy parameter $\epsilon$ is insufficient. For this reason, poincareMDP2019 compare the privacy of the two mechanisms by looking at the probability of not changing a word after noise injection, i.e. the probability that the mechanism returns the exact same word used as input. Even though this notion can intuitively be seen as a level of indistinguishability, it cannot guarantee a fair comparison between mechanisms. In Section 5 we propose a method that ensures a fairer comparison, based on a privacy loss bound.

Finally, to the best of our knowledge, our work is the first to apply metric differential privacy to text data efficiently on-device. This application scenario allows users to share only already-privatized data, keeping their sensitive information local. In this sense, our intended use is similar to the goals of Local Differential Privacy (LDP) [dwork2006calibrating]. Nonetheless, previous work on LDP focused on aggregate statistics instead of individual word privatization; examples are Google’s RAPPOR [fanti2016building], Apple’s DP distributed system [appleDP] and Microsoft’s Private Collection of Telemetry Data [ding2017collecting]. In the context of individual privatization of a given input, in our case a word, using LDP would mean adding extremely large amounts of noise, a requirement that metric-DP relaxes by including distance metrics within the differential privacy guarantees.

3 Preliminaries

Consider a user giving as input a word $w$ from a discrete fixed domain $\mathcal{W}$. For any pair of inputs $w, w' \in \mathcal{W}$, we assume a distance function $d(w, w')$ defined in a given space of representation of these words. Specifically, we consider that a word embedding model will be used to represent words, and the distance function can be any valid metric applicable to the embedding vectors.

Our goal is to select a word from $\mathcal{W}$, based on a given input, such that the privacy of the user with respect to their word choice is preserved. From an attacker’s perspective, the output of an algorithm working over an input $w$ or $w'$ becomes more similar as these inputs become closer according to the distance $d$. In other words, if two words are close in the embedding space, they tend to generate similar outputs with similar probabilities.

With that in mind, we work with Metric Differential Privacy [chatzikokolakis2013metricdp], a privacy standard defined for randomized algorithms whose inputs come from a domain $\mathcal{W}$ equipped with a distance function $d$ satisfying the formal axioms of a metric. In this context, algorithms satisfying metric-DP have privacy guarantees that depend not only on the privacy parameter $\epsilon$, but also on the particular distance metric being used.

Definition 3.1.

(Metric Differential Privacy [chatzikokolakis2013metricdp]) Given a distance metric $d$, a randomized mechanism $M$ is $\epsilon d$-differentially private if for any $w, w' \in \mathcal{W}$ and all outputs $y$ we have:

$$\Pr[M(w) = y] \leq e^{\epsilon \, d(w, w')} \, \Pr[M(w') = y] \quad (1)$$

Usually, under the standard definition of differential privacy [dwork2006calibrating], the privacy guarantees provided by different mechanisms are compared by looking at the $\epsilon$ value, such that mechanisms with the same $\epsilon$ give the same privacy guarantee. For a fair evaluation of metric-DP using $\epsilon$, we also have to consider the distance metric used, since it also affects the privacy guarantees, as we can see from Definition 3.1. We describe our theoretically motivated method to enable a fair privacy comparison of mechanisms with different metrics in Section 5.

For the Euclidean distance as metric, as discussed in Section 2, the current state-of-the-art is the Madlib mechanism. It uses the Laplace mechanism to add Laplacian noise to a given embedding vector in order to obtain a private output.

Input: Finite domain $\mathcal{W}$, input word $w \in \mathcal{W}$, and privacy parameter $\epsilon$.
Output: Privatized word $\hat{w}$.
1: Compute embedding $\phi(w)$
2: Perturb embedding to obtain $\hat{\phi} = \phi(w) + Z$, with noise density $p(Z = z) \propto \exp(-\epsilon \lVert z \rVert)$
3: Obtain perturbed element: $\hat{w} = \operatorname{argmin}_{u \in \mathcal{W}} \lVert \phi(u) - \hat{\phi} \rVert$
4: Return $\hat{w}$
Algorithm 1 - Madlib: Word Privatization Mechanism for Metric Differential Privacy
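For illustration, the following is a minimal Python sketch of a Madlib-style privatization step, assuming a small in-memory vocabulary with real-valued embeddings; the names madlib_privatize, vocab and embeddings are ours, not the original implementation, and the noise with density proportional to exp(-epsilon * ||z||) is drawn by sampling a uniform direction and a Gamma-distributed magnitude.

import numpy as np

def madlib_privatize(word, vocab, embeddings, epsilon, rng=None):
    """Sketch of a Madlib-style step: add noise with density proportional to
    exp(-epsilon * ||z||) to the word's embedding, then return the vocabulary
    word whose embedding is closest to the noisy vector."""
    rng = rng or np.random.default_rng()
    vec = embeddings[vocab.index(word)]
    n = vec.shape[0]
    # Multivariate Laplacian noise: uniform direction, Gamma(n, 1/epsilon) magnitude.
    direction = rng.normal(size=n)
    direction /= np.linalg.norm(direction)
    noisy = vec + rng.gamma(shape=n, scale=1.0 / epsilon) * direction
    # Post-processing: exact nearest neighbor under the Euclidean metric.
    return vocab[int(np.argmin(np.linalg.norm(embeddings - noisy, axis=1)))]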

For a Euclidean metric $d$, Madlib provides metric differential privacy.

Theorem 1.

For a Euclidean distance metric $d$, Algorithm 1 is $\epsilon d$-differentially private. Proof in [madlib2020].

Next we develop our algorithm that satisfies metric-DP, giving a formal proof of its privacy guarantees.

4 Mechanism

Our proposed mechanism employs metric differential privacy to sanitize words via their embedding vectors. The challenge with protecting privacy is that, for any given input word, the output of a DP algorithm as defined in [dwork2006calibrating] can be any word in the vocabulary, i.e. the output distributions of a mechanism for any pair of input words must be nearly indistinguishable. Metric-DP provides a generalization of DP that allows adjusting the privacy of an input by leveraging a given distance metric, thus being a suitable framework for improved utility in the text scenario.

However, algorithms like Madlib rely on having access to word embedding vectors and an approximate nearest neighbor index to map noisy vectors to words. The space cost of these can range from hundreds of MB to several GB, so using such representations to sanitize words on a user's device would be impractical. In this context, recent research [tissier2019near, shen2019learning] has focused on converting pre-trained real-valued embedding vectors into binary representations, explicitly aiming to keep the semantic meaning while transforming the representations. These works report experimental results on machine learning tasks where the binarization of word embeddings leads to a loss of approximately 2% in accuracy, with the upside of reducing the embeddings' size by 97%. Thus, in this work we propose to use binary embeddings of words, obtained by transforming publicly available continuous representations such as GloVe [pennington2014glove] or FastText [fasttext2017]. In addition, specialized nearest neighbor indexes for binary vectors exist [norouzi2012fast] that dramatically reduce the space and time cost of nearest neighbor retrieval.
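As a rough illustration of this kind of transformation (not the learned autoencoder of [tissier2019near]), the sketch below binarizes real-valued embeddings with simple random-hyperplane (sign) hashing, which approximately preserves angular similarity; the function name and shapes are our assumptions.

import numpy as np

def binarize_embeddings(real_embeddings, n_bits=256, seed=0):
    """Random-hyperplane (sign) hashing: project onto n_bits random directions
    and keep only the sign, yielding one bit per direction."""
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(real_embeddings.shape[1], n_bits))
    return (real_embeddings @ planes > 0).astype(np.uint8)

# e.g. a (vocab_size, 300) GloVe matrix becomes a (vocab_size, 256) 0/1 matrix.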

Randomized Response (RR) [warner1965randomized] is a mechanism that provably [wang2016rr] obtains better utility than the classical Laplace mechanism for binary data collection. RR dates back to 1965; its original purpose was to motivate survey respondents to answer questions truthfully without the risk of exposing any private information. For sensitive yes/no questions, survey participants would use a spinner, similar to flipping a coin, and based on its outcome would either respond truthfully (if, for example, the coin came up heads) or respond "yes" otherwise. This mechanism provides individuals plausible deniability, while allowing researchers to de-bias the results and obtain the aggregate metrics they need. In our case, RR flips a given input bit of an embedding vector with a probability that decreases as the privacy parameter $\epsilon$ grows. We describe a general version [wang2016rr] of RR in Algorithm 2.

Input: Bit $b \in \{0, 1\}$ and privacy parameter $\epsilon$.
Output: Privatized bit $\hat{b}$.
1: Set $\hat{b} = b$ with probability $\frac{e^{\epsilon}}{e^{\epsilon} + 1}$, otherwise set $\hat{b} = 1 - b$
2: Return $\hat{b}$
Algorithm 2 - RR: Randomized Response
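A minimal Python sketch of Algorithm 2 (the function name is ours):

import numpy as np

def randomized_response(bit, epsilon, rng=None):
    """Keep the bit with probability e^eps / (e^eps + 1), otherwise flip it."""
    rng = rng or np.random.default_rng()
    keep_prob = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return bit if rng.random() < keep_prob else 1 - bit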

RR satisfies metric differential privacy with respect to the privacy parameter $\epsilon$ and the Hamming distance. Since this is the first time RR is applied in the metric-DP setting, we include a proof of its privacy guarantees.

Theorem 2.

For the Hamming metric $d_H$, Algorithm 2 is $\epsilon d_H$-differentially private.

Proof.

For RR to satisfy metric differential privacy with the Hamming distance metric $d_H$, we have to show that, for any two bits $b$ and $b'$ and any response bit $\hat{b}$, we get:

$$\Pr[RR(b) = \hat{b}] \leq e^{\epsilon \, d_H(b, b')} \, \Pr[RR(b') = \hat{b}] \quad (2)$$

For $b = b'$ we have $d_H(b, b') = 0$ and also $\Pr[RR(b) = \hat{b}] = \Pr[RR(b') = \hat{b}]$, therefore Equation 2 is satisfied.

For $b \neq b'$, from the definition of RR, we have:

$$\frac{\Pr[RR(b) = \hat{b}]}{\Pr[RR(b') = \hat{b}]} \leq \frac{e^{\epsilon} / (e^{\epsilon} + 1)}{1 / (e^{\epsilon} + 1)} = e^{\epsilon}$$

Since for $b \neq b'$ we also get $d_H(b, b') = 1$, Equation 2 is satisfied in this case as well. ∎

We now describe our mechanism, Binary embeddings over Randomized Response (BRR), presented in Algorithm 3. BRR uses binary embedding vectors to represent words and applies RR to make each binary vector differentially private.

Input: Finite domain $\mathcal{W}$, input word $w \in \mathcal{W}$, and privacy parameter $\epsilon$.
Output: Privatized word $\hat{w}$.
1: Compute binary embedding vector $\beta(w) \in \{0, 1\}^{n}$
2: Perturb each bit of $\beta(w)$ using Randomized Response (Algorithm 2) to obtain $\hat{\beta}$
3: Obtain perturbed word: $\hat{w} = \operatorname{argmin}_{u \in \mathcal{W}} d_H(\beta(u), \hat{\beta})$
4: Return $\hat{w}$
Algorithm 3 - BRR: Mechanism for Text as Binary Embeddings over Randomized Response
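A minimal end-to-end sketch of Algorithm 3, assuming a list vocab and a 0/1 matrix binary_embeddings with one row per word (the names are ours, not the paper's implementation):

import numpy as np

def brr_privatize(word, vocab, binary_embeddings, epsilon, rng=None):
    """BRR sketch: apply randomized response to every bit of the word's binary
    embedding, then return the word closest in Hamming distance to the result."""
    rng = rng or np.random.default_rng()
    bits = binary_embeddings[vocab.index(word)]
    keep_prob = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    flips = (rng.random(bits.shape[0]) >= keep_prob).astype(bits.dtype)
    noisy = np.bitwise_xor(bits, flips)          # flip the selected bits
    # Post-processing: exact nearest neighbor under the Hamming metric.
    hamming = np.count_nonzero(binary_embeddings != noisy, axis=1)
    return vocab[int(np.argmin(hamming))]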

With the algorithm described, we state the privacy guarantees of BRR.

Theorem 3.

For the Hamming metric $d_H$, Algorithm 3 is $\epsilon d_H$-differentially private.

Proof.

For embeddings that are independent of the data, the nearest neighbor search is just a post-processing step, thus incurring no additional privacy loss. Therefore we only have to analyze the privacy of releasing the perturbed embedding vector.

Consider $d_H$ as the Hamming distance and $M$ as the mechanism inside BRR that performs the embedding perturbation, such that each word $w$ has a binary embedding representation $\beta(w) \in \{0, 1\}^{n}$. Then for a given output vector $\hat{\beta}$, with $i$-th bit $\hat{\beta}_i$, and any pair of inputs $w, w'$, with $i$-th bits $\beta(w)_i$ and $\beta(w')_i$, from using RR on a single bit position we have that:

$$\Pr[RR(\beta(w)_i) = \hat{\beta}_i] \leq e^{\epsilon \, d_H(\beta(w)_i, \beta(w')_i)} \, \Pr[RR(\beta(w')_i) = \hat{\beta}_i]$$

For $n$ bits, multiplying the probabilities for each of them, we then have:

$$\Pr[M(w) = \hat{\beta}] = \prod_{i=1}^{n} \Pr[RR(\beta(w)_i) = \hat{\beta}_i] \leq e^{\epsilon \sum_{i=1}^{n} d_H(\beta(w)_i, \beta(w')_i)} \prod_{i=1}^{n} \Pr[RR(\beta(w')_i) = \hat{\beta}_i] = e^{\epsilon \, d_H(\beta(w), \beta(w'))} \, \Pr[M(w') = \hat{\beta}]$$

where the last step comes from the definition of the Hamming distance. ∎

BRR is similar to the Madlib mechanism, differing in the use of binary embeddings instead of real-valued ones, and Randomized Response instead of the Laplace mechanism. Since RR is provably better than the Laplace mechanism for binary data [wang2016rr], if the semantic loss from transforming embeddings into binary form is smaller than the gain from using RR instead of Laplace, then BRR is a promising alternative to Madlib.

More importantly, our mechanism is suitable for privatizing data on-device due to the use of binary embeddings and a specialized nearest neighbor index. On one side, we have the memory/storage reduction from using binary embedding vectors; additionally, we gain computational efficiency in the perturbation with RR, implemented as sampling from a binomial distribution together with an XOR operation, which can be done efficiently at the hardware level. With these optimizations, the user only needs to share already-privatized data, keeping ownership of the sensitive data. This is a highly desirable feature, as it allows the use of valuable user data for NLP tasks while sensitive information never leaves the user's device.
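As a sketch of that implementation detail, assuming embeddings stored bit-packed as uint8 words with n_bits a multiple of 8 (the function name is ours), the flip mask can be drawn once from a binomial distribution, packed, and applied with a single XOR over the packed bytes:

import numpy as np

def perturb_packed(packed_bits, n_bits, epsilon, rng=None):
    """Perturb a bit-packed (uint8) binary embedding: sample a Bernoulli flip
    mask per bit, pack it, and apply it with one bitwise XOR."""
    rng = rng or np.random.default_rng()
    flip_prob = 1.0 / (np.exp(epsilon) + 1.0)
    mask = rng.binomial(1, flip_prob, size=n_bits).astype(np.uint8)
    return np.bitwise_xor(packed_bits, np.packbits(mask))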

5 Comparing Metric-DP Mechanisms

One issue with a fair evaluation of BRR against Madlib is that they use different distance functions. To solve this we propose fixing a privacy ratio that allows us to obtain similar privacy guarantees, even when two mechanisms use different distance metrics. Due to space limitations we provide a brief description here and refer the interested reader to Appendix A for a detailed motivation of the method.

To compare mechanisms with different distance metrics, we consider an estimate of the privacy loss bound $\epsilon \cdot D$, where $D$ is an aggregate distance measurement based on the distances between all possible pairs of words. In this work, for each mechanism we use either $d_{\mathrm{avg}}$, where we average the distances of all pairs, or $d_{\max}$, where we use the maximum distance between any two words. To fairly compare the privacy of two mechanisms, we equalize their bounds via a privacy ratio.

Definition 5.1 (Method to Fix Privacy).

Given two randomized mechanisms $M_1$ and $M_2$, both taking inputs from $\mathcal{W}$ to output space $\mathcal{Y}$ and satisfying respectively $\epsilon_1 d_1$-DP and $\epsilon_2 d_2$-DP, with aggregate distances $D_1$ and $D_2$, we denote the privacy ratio as $r = D_1 / D_2$. In order to ensure a similar privacy loss on both mechanisms, for any given $\epsilon_1$ defined for $M_1$ we need to set:

$$\epsilon_2 = r \cdot \epsilon_1 \quad (3)$$

In practice, to compare two mechanisms $M_1$ and $M_2$, we first calculate the aggregate distances $D_1$ and $D_2$, e.g. the max or average distance between all words under each mechanism's metric. We then obtain the privacy ratio $r = D_1 / D_2$. For experiments we can simply choose any value of $\epsilon_1$ for $M_1$ and then set $\epsilon_2 = r \cdot \epsilon_1$ for $M_2$. This implies that $\epsilon_1 D_1 = \epsilon_2 D_2$, fixing the estimate of the privacy loss for the two mechanisms considered.
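A brief sketch of this procedure, for a small vocabulary where all pairwise distances fit in memory (the helper name and the euclidean/hamming callables are assumptions of ours):

import numpy as np

def aggregate_distance(embeddings, metric, how="avg"):
    """Aggregate pairwise distances over a small vocabulary: 'avg' or 'max'."""
    n = embeddings.shape[0]
    dists = [metric(embeddings[i], embeddings[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.max(dists)) if how == "max" else float(np.mean(dists))

# Hypothetical usage, equalizing the privacy loss bound of Madlib (M1) and BRR (M2):
# D1 = aggregate_distance(real_embeddings, euclidean, "avg")
# D2 = aggregate_distance(binary_embeddings, hamming, "avg")
# r = D1 / D2
# eps_brr = r * eps_madlib      # so that eps_madlib * D1 == eps_brr * D2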

By calculating the privacy ratio we can obtain values $\epsilon_1$ and $\epsilon_2$ that give similar privacy guarantees for two mechanisms $M_1$ and $M_2$. We note that previous work [poincareMDP2019] compared the privacy of two mechanisms by looking at the probability that a mechanism returns the exact same word used as input. However, such a notion does not always give a fair comparison between mechanisms, as that probability can vary considerably across embedding spaces. In contrast, our method is theoretically grounded, with well-defined bounds.

6 Experiments

In this section we compare BRR to previous work in terms of privacy, utility and efficiency. Since DP mechanisms usually exhibit a privacy-utility trade-off, we fix the privacy of the mechanisms in order to compare their utility, using the methodology described in Section 5. Our experiments use the IMDB dataset [imdbData], with training data being 50% of the original training set, validation data being the other 50% of the original training set, and testing data being 50% of the original testing set. The experiments were performed on an AWS EC2 p3.2xlarge instance. The specific setup is described in the next paragraphs.

Privacy: In our experiments we vary $\epsilon$ for Madlib and set the $\epsilon$ for BRR using the privacy ratio. To take a more conservative approach, we use the average distance aggregation $d_{\mathrm{avg}}$ to fix the privacy loss of BRR and Madlib. We note, though, that using $d_{\max}$ would give significantly better results for BRR in comparison to Madlib.

Utility: To evaluate the utility of the metric-DP mechanisms, we build ML models for sentiment analysis on training data privatized by each mechanism and compare the accuracy of the trained models on a separate testing dataset. For Madlib we use the Euclidean distance on a fixed embedding space from GloVe [pennington2014glove] with 300 dimensions. BRR starts from the same embedding, then transforms it into a binary representation with 256 dimensions as described in [tissier2019near]. The sentiment classification models follow the FastText classifier [Joulin_2017]. The accuracy on the test dataset is shown in Figure 1(a).

Figure 1: (a) Utility Evaluation; (b) MIA Evaluation. Comparison of mechanisms with 95% confidence intervals over 10 independent trials for various $\epsilon$. Baseline is built with models trained on the original sensitive data. $\epsilon$ is first set for Madlib and then transformed for BRR using the method from Section 5.

As seen in Figure 1(a), in terms of utility we obtain similar results for Madlib's $\epsilon$ under 5, and improved results for BRR for larger $\epsilon$, where each setting is averaged over 5 independent trials, with the shaded area showing the 95% confidence interval.

Defense against attacks: We also include the results of a Membership Inference Attack (MIA) [shokri2017membership], which tries to infer the presence of observations in the training data of a given model based only on black-box access, i.e. the attacker can only query the deployed model and does not have access to its weights. A lower attack score is better, representing stronger privacy preservation.

In summary, we have a target model that we can only query, e.g. through a public API, and we want to attack it to determine the membership of particular users in its training set. To attack the target model we train a shadow model, based on disjoint data we have available, which tries to emulate the behavior of the target model. For example, we could attack a next-word prediction model trained on private data by training a similar model using data from a user's public Twitter feed. Our dataset then has two labels: the first, called member, we use to train the shadow model; the second, called non-member, we set aside to train an attack model that tells us whether an example was part of the training data of the target model.

For this step, we use 50% of the original IMDB testing data to train a shadow model and the other 50% to validate it. The models created for the utility evaluation are the targets of the attack. The attack model is an MLP with two layers of 64 hidden nodes each, with ReLU activations. Results are shown in Figure 1(b). We can see that for the various privacy levels represented by the $\epsilon$ values tested, we obtain practical privacy protection, represented by the drop in the AUC of the attack model. More specifically, as we increase $\epsilon$, which weakens the formal privacy guarantees of DP, we also obtain less empirical privacy, represented by larger AUC. For very large $\epsilon$ we see a bigger impact of the MIA on BRR. However, even though Madlib shows a smaller impact on MIA, that is not a formal privacy guarantee; therefore, such large values of $\epsilon$ are not recommended for either mechanism on the dataset analyzed. Finally, we note that in the range of $\epsilon$ where both mechanisms have approximately the same AUC for MIA, BRR achieves similar or better utility than Madlib.
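A hedged sketch of such an attack model, using scikit-learn (the variable names and the use of scikit-learn are our assumptions; the paper does not specify the framework):

from sklearn.neural_network import MLPClassifier

# Attack model as described in the text: an MLP with two hidden layers of 64
# nodes each and ReLU activations. Features and labels (member vs. non-member)
# are assumed to be derived from the shadow model's outputs as described above.
attack_model = MLPClassifier(hidden_layer_sizes=(64, 64), activation="relu", max_iter=500)
# attack_model.fit(attack_features, attack_labels)
# scores = attack_model.predict_proba(target_outputs)[:, 1]   # membership scores for AUC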

Efficiency: To evaluate the efficiency of the mechanisms along different dimensions, we consider the size of the embedding vectors and the approximate nearest neighbor index, the wall time of privatization per word, and the total wall time per mechanism. For nearest neighbor search we use FAISS [faissJDH17], a library that handles both real-valued and binary vectors.

For our relatively small vocabulary, the nearest neighbors index built for BRR achieved a compression rate of 97.9% (4MB vs. 200MB), while the vocabulary file, along with embeddings, was also 98.5% smaller (6MB vs. 300MB) compared to the ones used by Madlib.

Figure 2: Wall-time comparison between BRR and Madlib on IMDB dataset.

For a fair time comparison, we looked for the best available ways to speed up the nearest neighbor computation during privatization for both BRR and Madlib. This step is the most time-consuming part of both methods: Madlib uses real-valued data with the Euclidean distance, while BRR uses binary data with the Hamming distance.

In our experiments, BRR was on average 68% faster at privatizing a word than Madlib, as we can see in Figure 2. Using binary embedding vectors significantly improved the running time of our solution compared to using real-valued vectors, due to the possibility of using more efficient algorithms and hardware-level optimizations through binary operations. We used the FAISS library (https://github.com/facebookresearch/faiss), which implements algorithms [norouzi2012fast] tailored for binary data, for both exact and approximate nearest neighbor search. We note that FAISS has additional features that we did not apply but that could be further explored, such as product quantization, a technique for lossy compression of high-dimensional vectors, and PCA. Similarly, for Madlib we use approximate nearest neighbors, specifically the Annoy library (https://github.com/spotify/annoy). This speeds up retrieval at the cost of losing the guarantee of finding the exact nearest neighbor.
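For reference, a small sketch of exact Hamming-distance search with FAISS's binary index (the array names and shapes are our assumptions; FAISS expects bit-packed uint8 rows, e.g. 256 bits = 32 bytes per word):

import faiss
import numpy as np

n_bits = 256
index = faiss.IndexBinaryFlat(n_bits)        # exact search under the Hamming distance
index.add(packed_vocab_embeddings)           # uint8 array of shape (vocab_size, n_bits // 8)
distances, neighbors = index.search(noisy_packed_vectors, 1)   # nearest word per noisy vector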

7 Conclusion

We presented a mechanism for efficient on-device text privatization with formal differential privacy guarantees. We demonstrated that our new mechanism enables on-device privatization through the use of binary embeddings, providing zero-trust privacy for customers while maintaining better or competitive utility compared to the state-of-the-art. As future work, we plan to improve the nearest neighbor search step, explore smaller dimensionalities of the embedding vectors, and include privatization of word context vectors for improved utility. We are also exploring other DP mechanisms, such as the exponential mechanism [mt2007expmech], which allows the use of any metric function in the privatization process. Finally, a related line of work is how to achieve on-device privacy for tasks like Automatic Speech Recognition.

References

Appendix A Motivation for Privacy Ratio

In the standard definition of differential privacy [dwork2006calibrating], the privacy guarantees provided by different mechanisms are compared by looking only at the $\epsilon$ value: mechanisms with the same $\epsilon$ provide the same privacy guarantees. However, referring to Equation 1 we see that for metric-DP the privacy bound is directly impacted not only by $\epsilon$ but also by the distance metric. As a result, comparing mechanisms with different metrics is a non-trivial task. In this context we propose calculating, for each mechanism, a privacy loss bound over all pairs $w, w' \in \mathcal{W}$.

To compare privacy among mechanisms, we first look at the privacy loss, which is a general function defined for randomized mechanisms and not specific to any particular differential privacy definition.

Definition A.1.

Let $M$ be a randomized mechanism with density (or probability mass) function $p_M(y \mid w)$; then for any $w, w' \in \mathcal{W}$ and all outputs $y$ the privacy loss function is defined as:

$$\mathcal{L}_{M}(y; w, w') = \ln \frac{p_M(y \mid w)}{p_M(y \mid w')} \quad (4)$$

Every differentially private mechanism has, by definition, an upper bound on $\mathcal{L}_{M}$; the same holds for metric-DP, as we show next.

Corollary A.1.

Given a randomized mechanism $M$ that is $\epsilon d$-differentially private for a given distance function $d$, we have that for any $w, w' \in \mathcal{W}$ and all outputs $y$:

$$\mathcal{L}_{M}(y; w, w') \leq \epsilon \, d(w, w') \quad (5)$$

From Equation 5 above we can see that the privacy loss bound for any pair $w, w'$ depends on their distance $d(w, w')$ and on $\epsilon$. Therefore, for a given metric $d$ we propose calculating $d(w, w')$ for every possible pair in order to obtain an overall bound on the privacy loss. With these distances, we can define a privacy measurement using either the maximum or the average, as we now formalize.

Definition A.2.

For a given distance function $d$ and finite space $\mathcal{W}$, we define the privacy measurements $d_{\max}$ and $d_{\mathrm{avg}}$ as:

$$d_{\max} = \max_{w, w' \in \mathcal{W}} d(w, w') \quad (6)$$
$$d_{\mathrm{avg}} = \operatorname{mean}_{w, w' \in \mathcal{W}} d(w, w') \quad (7)$$

For example, in the context of words as input, these measurements can be calculated using a given distance metric on the embedding space over every word in the vocabulary. Comparing the two privacy measurements above, we note that $d_{\max}$ has the advantage of giving an overall bound on the privacy loss. This can be derived from Equations 5 and 6, such that for any $w, w' \in \mathcal{W}$ and all outputs $y$, the privacy loss is bounded by:

$$\mathcal{L}_{M}(y; w, w') \leq \epsilon \, d_{\max} \quad (8)$$

On the other hand, $d_{\mathrm{avg}}$ gives an average distance bound over a given space $\mathcal{W}$, which is a more conservative approach and may be of interest to avoid spaces where outliers would individually have a large effect on the privacy loss upper bound.

Given the above, $\epsilon \cdot d_{\max}$ or $\epsilon \cdot d_{\mathrm{avg}}$ can be used as estimates of the privacy loss bound, which allows us to fairly compare mechanisms using different distance metrics, as described in Section 5, by equalizing their bounds through the privacy ratio.