Leveraging Hierarchical Representations for Preserving Privacy and Utility in Text

Oluwaseyi Feyisetan, et al. · October 20, 2019

Guaranteeing a certain level of user privacy in an arbitrary piece of text is a challenging issue. However, with this challenge comes the potential of unlocking access to vast data stores for training machine learning models and supporting data-driven decisions. We address this problem through the lens of $d_\chi$-privacy, a generalization of Differential Privacy to non-Hamming distance metrics. In this work, we explore word representations in Hyperbolic space as a means of preserving privacy in text. We provide a proof satisfying $d_\chi$-privacy, then we define a probability distribution in Hyperbolic space and describe a way to sample from it in high dimensions. Privacy is provided by perturbing vector representations of words in high dimensional Hyperbolic space to obtain a semantic generalization. We conduct a series of experiments to demonstrate the tradeoff between privacy and utility. Our privacy experiments illustrate protections against an authorship attribution algorithm, while our utility experiments highlight the minimal impact of our perturbations on several downstream machine learning models. Compared to the Euclidean baseline, we observe > 20x greater guarantees on expected privacy against comparable worst case statistics.


I Introduction

Accepted at ICDM 2019

In Machine Learning (ML) tasks and Artificial Intelligence (AI) systems, training data often consists of information collected from users. This data can be sensitive; for example, in conversational systems, a user can explicitly or implicitly disclose their identity or some personal preference during their voice interactions. Explicit personally identifiable information (PII) (such as an individual’s PIN or SSN) can potentially be filtered out via rules or pattern matching. However, more subtle privacy attacks occur when seemingly innocuous information is used to discern the private details of an individual [20]. This can lead to a number of attack vectors – ranging from human annotators making deductions on user queries [24] to membership inference attacks launched against machine learning models trained on such data [49]. As a result, privacy-preserving analysis has increasingly been studied in statistics, machine learning and data mining [19, 48] to build systems that provide better privacy guarantees.

Of particular interest are these implicit, subtle privacy breaches which occur as a result of an adversary’s ability to leverage observable patterns in the user’s data. These tracing attacks have been described as akin to ‘fingerprinting’ [9] due to their ability to identify the presence of a user’s data in the absence of explicit PII values. The work by [51] demonstrates how to carry out such tracing attacks on ML models by determining if a user’s data was used to train the model. These all go to illustrate that the traditional notion of PII which is used to build anonymization systems is fundamentally flawed [47]. Essentially, any part of a user’s information can be used to launch these attacks, and we are therefore in a post-PII era [20]. This effect is more pronounced in naturally generated text, as opposed to statistical data, where techniques such as Differential Privacy (DP) have been established as a de facto way to mitigate these attacks.

While providing quantifiable privacy guarantees over a user’s textual data has attracted recent attention [13, 57], there is significantly more research into privacy-preserving statistical analysis. In addition, most of the text-based approaches have focused on providing protections over vectors, hashes and counts [21, 56]. The question remains: what quantifiable guarantees can we provide over the actual text? We seek to answer that question by adopting the notion of $d_\chi$-privacy [4, 10, 11], an adaptation of Local Differential Privacy (LDP) [28] which was designed for providing privacy guarantees over location data. $d_\chi$-privacy generalizes DP beyond Hamming distances to include Euclidean, Manhattan and Chebyshev metrics, among others. In this work, we demonstrate the utility of the Hyperbolic distance for $d_\chi$-privacy in the context of textual data. This is motivated by the ability to better encode hierarchical and semantic information in Hyperbolic space than in Euclidean space [31, 38, 37].

At a high level, our algorithm preserves privacy by providing plausible deniability [7] over the contents of a user’s query. We achieve this by transforming selected words to a high dimensional vector representation in Hyperbolic space as defined by Poincaré word embeddings [38]. We then perturb the vector by sampling noise from the same Hyperbolic space, with the amount of added noise being proportional to the privacy guarantee. This is followed by a post-processing step of discretization where the noisy vector is mapped to the closest word in the embedding vocabulary. This algorithm conforms to the $d_\chi$-privacy model introduced by [4], with our transformations carried out in higher dimensions, in a different metric space, and within a different domain. To understand why this technique preserves privacy, we describe motivating examples in Sec. II-A and define how we quantify privacy loss by using a series of interpretable proxy statistics in Sec. VI.

I-A Contributions

Our contributions in this paper are summarized as follows:

  1. We demonstrate that the Hyperbolic distance metric satisfies $d_\chi$-privacy by providing a formal proof in the Lorentz model of Hyperbolic space.

  2. We define a probability distribution in Hyperbolic space for drawing noise values and also describe how to sample from the distribution.

  3. We evaluate our approach by preserving privacy against an attribution algorithm, baselining against a Euclidean model, and preserving utility on downstream systems.

II Privacy Requirement

Consider a user interacting freely with an AI system via a natural language interface. The user’s goal is to meet some specific need with respect to an issued query $x$. The expected norm in this specific context would be satisfying the user’s request. A privacy violation occurs when $x$ is used to make personal inference beyond what the norm allows [39]. This generally manifests in the form of unrestricted PII present in $x$ (including, but not restricted to, locations, medical conditions or personal preferences [47]). In many cases, the PII contains more semantic information than what is required to address the user’s base intent, and the AI system can handle the request without recourse to the explicit PII (we discuss motivating examples shortly). Therefore, our goal is to output $\hat{x}$, a semantics-preserving redaction of $x$ that preserves the user’s objective while protecting their privacy. We approach this privacy goal along two dimensions (described in Sec. VI): (i) uncertainty – the adversary cannot base their guesses about the user’s identity and property on information known with certainty from $\hat{x}$; and (ii) indistinguishability – the adversary cannot distinguish whether an observed query $\hat{x}$ was generated by a given user’s query $x$, or another similar query $x'$.

To describe the privacy requirements and threat model, we defer to the framework provided by [55]. First, we set our privacy domain to be the non-interactive textual database setting where we seek to release a sanitized database to an internal team of analysts who can visually inspect the queries. We also restrict this database to the one user, one query model – i.e., for the baseline, we are not concerned with providing protections on a user’s aggregate data. In this model, the analyst is a required part of the system, thus, it is impossible to provide semantic security where the analyst learns nothing. This is only possible in a three-party cryptographic system (e.g. under the Alice, Bob and Eve monikers) where the analyst is different from the attacker (in our threat model, the analyst is simultaneously Bob and Eve).

We address purely privacy issues by considering that the data was willingly made available by the user in pursuit of a specific objective (as opposed to security issues [6] where the user’s data might have been stolen). Therefore, we posit that the user’s query is published and observable data. Our overall aim is to protect the user from identity and property inference, i.e., given $\hat{x}$, the analyst should be able to infer with certainty neither the user’s identity nor some unique property of the user.

II-A Motivating examples

To illustrate the desired functionality, which is to infer the user’s high level objective while preserving privacy, let us consider the examples in Tab. I from the Snips dataset [16]:

Intent              Sample query                          New word
GetWeather          will it be colder in _ohio_           (that) state
PlayMusic           play _techno_ on _lastfm_             music; app
BookRestaurant      book a restaurant in _milladore_      (the) city
RateBook            rate _the firebrand_ one of 6 stars   product
SearchCreativeWork  i want to watch _manthan_             (a) movie

TABLE I: Examples from the Snips dataset (slot values, underlined in the original, are marked here with underscores)

In the examples listed, the underlined terms correspond to the well defined notion of ‘slot values’ while the other words are known as the ‘carrier phrase’. The slot values are essentially ‘variables’ in queries which can take on different values and are identified by an instance type. We observe therefore that replacing the slot value with a new word along the similarity or hierarchical axis does not change the user’s initial intent. As a result, we would expect $\hat{x}$ = ‘play (music, song) from (app)’ to be classified in the same way as $x$ = ‘play techno on lastfm’. We are interested in protecting the privacy of one user, issuing one query, while correctly classifying the user’s intent. This model is not designed to handle multiple queries from a user, neither is it designed for handling exact queries e.g. booking the ‘specific restaurant in milladore’.

Our objective is to create a privacy preserving mechanism that can carry out these slot transformations in a principled way, with a quantifiable notion of privacy loss.

III Privacy mechanism overview

In this section we review $d_\chi$-privacy as a generalization of DP over metric spaces. Next, we introduce word embeddings as a natural vector space for $d_\chi$-privacy over text. Then, we give an overview of the privacy algorithm in Euclidean space, and finish by discussing why Hyperbolic embeddings are a better candidate for the privacy task.

III-A Broadening privacy over metric spaces

Our requirement warrants a privacy metric that confers uncertainty via randomness to an observing adversary while providing indistinguishability on the user inputs and mechanism outputs. Over the years, DP [19] has been established as a mathematically well-founded definition of privacy. It guarantees that an adversary observing the result of an analysis will make essentially the same inference about any user’s information, regardless of whether the user’s data is or is not included as an input to the analysis. Formally, DP is defined on adjacent datasets $x$ and $x'$ that differ in at most a single row, i.e., the Hamming distance $d_H(x, x')$ between them is at most $1$. We say that a randomized mechanism $M$ satisfies $\varepsilon$-DP if for any such pair the distributions over outputs of $M(x)$ and $M(x')$ satisfy the following bound: for all $E \subseteq \mathrm{Range}(M)$ we have

$$\Pr[M(x) \in E] \le e^{\varepsilon\, d_H(x, x')}\, \Pr[M(x') \in E] \qquad (1)$$

where $d_H$ is the Hamming distance and $\varepsilon$ is the measure of privacy loss. [10] generalized the classical definition of DP by exploring other possible distance metrics, which are suitable where the Hamming distance is unable to capture the notion of closeness between datasets (see Fig. 1 for other distance metrics).

Fig. 1: Contour plots of different metrics [44]. Left to right: Manhattan distance, Euclidean distance, Chebyshev distance, Minkowski distance

For example, a privacy model built using the Manhattan distance metric can be used to provide indistinguishability when the objective is to release the number of days from a reference point [10]. Similarly, the Euclidean distance on a plane can be used to preserve privacy while releasing a user’s longitude and latitude to mobile applications [11]. Finally, the Chebyshev distance can be adopted to perturb the readings of smart meters, thereby preserving privacy on what TV channels or movies are being watched [55].

In order to apply $d_\chi$-privacy to the text domain, we first need a way to organize words in a space equipped with an appropriate distance metric. One way to achieve this is by representing words using a word embedding model.

III-B Word embeddings and their metric spaces

Word embeddings organize discrete words in a continuous metric space such that their similarity in the embedding space reflects their semantic or functional similarity. Word embedding models like Word2Vec [36], GloVe [42], and fastText [8] create such a mapping $\phi : \mathcal{W} \to \mathbb{R}^n$ of a set of words $\mathcal{W}$ into $n$-dimensional Euclidean space. The distance between words is measured by the distance function $d(w_1, w_2) = \|\phi(w_1) - \phi(w_2)\|$, where $\|\cdot\|$ denotes the Euclidean norm on $\mathbb{R}^n$. The vectors are generally learned by proposing a conditional probability for observing a word given its context words, or by predicting the context given the original word in a large text corpus [36].
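To make the metric-space view concrete, here is a minimal Python sketch (our illustration, not an artifact of the paper) that loads pretrained GloVe vectors and evaluates $d(w_1, w_2) = \|\phi(w_1) - \phi(w_2)\|$; the file name and the `load_glove` helper are assumptions.

```python
# Minimal sketch: words as points in Euclidean space, compared by the L2
# metric. The GloVe file path and helper names are illustrative assumptions.
import numpy as np

def load_glove(path: str) -> dict:
    """Parse a GloVe text file into a {word: vector} dictionary."""
    embeddings = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            embeddings[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return embeddings

def euclidean_distance(u: np.ndarray, v: np.ndarray) -> float:
    """d(w1, w2) = ||phi(w1) - phi(w2)||, the metric of the baseline space."""
    return float(np.linalg.norm(u - v))

emb = load_glove("glove.6B.50d.txt")
print(euclidean_distance(emb["london"], emb["england"]))  # semantically close
print(euclidean_distance(emb["london"], emb["banana"]))   # much farther apart
```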

III-C The privacy mechanism in Euclidean space

Our $d_\chi$-privacy algorithm is similar to the model introduced by [23] for privacy preserving text analysis, and [22] for author obfuscation. The algorithms are all analogous to that originally proposed by [4], and we describe it here using the Euclidean distance for word embedding vectors. In the ensuing sections, we will justify the need to use embedding models trained in Hyperbolic space (Sec. III-D) while highlighting the changes required to make the algorithm work in such a space. This includes the Hyperbolic word embeddings in Sec. V-E, describing the noise distribution in Sec. V-B, and how to sample from it in Sec. V-C.

Input: string $x = w_1 w_2 \ldots w_\ell$, privacy parameter $\varepsilon > 0$
1 for $i \in \{1, \ldots, \ell\}$ do
2        Word embedding: $v_i = \phi(w_i)$
3        Sample noise $N$ with density $p_N(z) \propto \exp(-\varepsilon \|z\|)$
4        Perturb embedding with noise: $\hat{v}_i = v_i + N$
5        Discretization: $\hat{w}_i = \mathrm{argmin}_{w \in \mathcal{W}} \|\phi(w) - \hat{v}_i\|$
6        Insert $\hat{w}_i$ in $i$th position of $\hat{x}$
7 release $\hat{x}$
Algorithm 1 Privacy Mechanism $M$
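For concreteness, the following Python sketch instantiates Algorithm 1 in the Euclidean setting, assuming an `emb` dictionary of word vectors such as the GloVe loader above. The Gamma-radius construction for a density $p(z) \propto \exp(-\varepsilon\|z\|)$ follows the multivariate generalization of the planar Laplace described in [4, 23], but all names, signatures and parameters here are our own illustrative choices.

```python
# Hedged sketch of Algorithm 1 in Euclidean space: embed, perturb, discretize.
import numpy as np

def sample_noise(n: int, epsilon: float, rng: np.random.Generator) -> np.ndarray:
    """Draw z with density p(z) proportional to exp(-epsilon * ||z||) in R^n:
    a uniformly random direction scaled by a Gamma(n, 1/epsilon) radius."""
    direction = rng.normal(size=n)
    direction /= np.linalg.norm(direction)
    radius = rng.gamma(shape=n, scale=1.0 / epsilon)
    return radius * direction

def mechanism(query: list, emb: dict, epsilon: float, seed: int = 0) -> list:
    """Replace each word with the vocabulary word nearest its noisy vector."""
    words = list(emb.keys())
    matrix = np.stack([emb[w] for w in words])
    rng = np.random.default_rng(seed)
    output = []
    for w in query:
        noisy = emb[w] + sample_noise(matrix.shape[1], epsilon, rng)  # perturb
        nearest = words[int(np.argmin(np.linalg.norm(matrix - noisy, axis=1)))]
        output.append(nearest)  # discretization step of Algorithm 1
    return output

# e.g. mechanism(["play", "techno", "on", "lastfm"], emb, epsilon=10.0)
```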

III-D The case for Hyperbolic space

Even though Euclidean embeddings can model semantic similarity between discrete words in continuous space, they are not well attuned to modeling the latent hierarchical structure of words required for our use case. To better capture semantic similarity and hierarchical relationships between words (without exponentially increasing the dimensionality of the embeddings), recent works [38, 37, 25] propose learning the vector representations in Hyperbolic space $\mathbb{H}^n$. Unlike the Euclidean model, the Hyperbolic model can realize word hierarchy through the norms of the word vectors and word similarity through the distance between word vectors (see Eqn. 2). Apart from hypernymy relationships (e.g., London ≺ England), Hyperbolic embeddings can also model multiple latent hierarchies for a given word (e.g., London ≺ Location and London ≺ City). Capturing these is-a relationships (or concept hierarchies) using Hyperbolic embeddings was recently demonstrated by [32] using data from large text corpora.

Furthermore, for Euclidean models such as [23, 22], the utility degrades badly as the privacy guarantees increase. This is because the noise injected (line 3 of Alg. 1) increases to match the privacy guarantees, resulting in words that are not semantically related to the initial word. The space defined by Hyperbolic geometry (Sec. IV), in addition to the distribution of words as concept hierarchies, does away with this problem while preserving privacy and utility of the user’s query.

IV Hyperbolic space and Geometry

Hyperbolic space $\mathbb{H}^n$ is a homogeneous space with constant negative curvature [31]. The space exhibits hyperbolic geometry, characterized by a negation of the parallel postulate, with infinitely many parallel lines passing through a point. It is thus distinguished from the other two isotropic spaces: Euclidean $\mathbb{E}^n$, with zero (flat) curvature; and spherical $\mathbb{S}^n$, with constant positive curvature. Hyperbolic spaces cannot be embedded isometrically into Euclidean space; any such embedding results in every point being a saddle point. In addition, the area of hyperbolic space grows exponentially (with respect to the curvature $K$ and radius $r$), while Euclidean space grows polynomially (see Tab. II for a summary of both spaces).

Property                 Euclidean      Hyperbolic
Curvature                $0$            $-\zeta^2$
Parallel lines           $1$            $\infty$
Triangles are            normal         thin
Sum of triangle angles   $\pi$          $< \pi$
Circle length            $2\pi r$       $2\pi \sinh(\zeta r)$
Disk area                $\pi r^2$      $2\pi(\cosh(\zeta r) - 1)$
TABLE II: Properties of Euclidean and hyperbolic geometries. Parallel lines is the number of lines parallel to a line that go through a point not on this line, and $\zeta = \sqrt{-K}$ [31]

As a result of the unique characteristics of hyperbolic space, it can be constructed with different isomorphic models. These include: the Klein model, the Poincaré disk model, the Poincaré half-plane model, and the Lorentz (or hyperboloid) model. In this paper, we review two of the models: the Lorentz model, and the Poincaré model. We also highlight what unique properties we leverage in each model and how we can carry out transformations across them.

IV-A Poincaré ball model

The $n$-dimensional Poincaré ball $\mathbb{B}^n$ is a model of hyperbolic space where all the points are mapped within the $n$-dimensional open unit ball, i.e., $\mathbb{B}^n = \{x \in \mathbb{R}^n : \|x\| < 1\}$, where $\|\cdot\|$ is the Euclidean norm. The boundary of the ball, i.e., the hypersphere $\mathbb{S}^{n-1}$, is not part of the hyperbolic space, but it represents points that are infinitely far away (see Fig. 2(a)). The Poincaré ball is a conformal model of hyperbolic space (i.e., Euclidean angles between hyperbolic lines in the model are equal to their hyperbolic values) with metric tensor:¹

$$g_x^{\mathbb{B}} = \lambda_x^2\, g^E, \quad \text{where } \lambda_x = \frac{2}{1 - \|x\|^2}$$

and $g^E$ is the Euclidean metric tensor. The Poincaré ball model then corresponds to the Riemannian manifold $(\mathbb{B}^n, g^{\mathbb{B}})$. Considering that the unit ball represents the infinite hyperbolic space, we introduce a distance metric by $r_{\mathbb{B}} = 2 \tanh^{-1}(r)$, where $r_{\mathbb{B}}$ is the Poincaré distance and $r$ is the Euclidean distance from the origin. Consequently, $r_{\mathbb{B}} \to \infty$ as $r \to 1$, which proves the infinite extent of the ball. Therefore, given points $u, v \in \mathbb{B}^n$ (e.g. representing word vectors) we define the isometric invariant [52]:

$$\delta(u, v) = 2\, \frac{\|u - v\|^2}{(1 - \|u\|^2)(1 - \|v\|^2)},$$

then the distance function over $\mathbb{B}^n$ is given by:

$$d_{\mathbb{B}}(u, v) = \operatorname{arcosh}\bigl(1 + \delta(u, v)\bigr). \qquad (2)$$

¹The metric tensor (like a dot product) gives local notions of length and angle between tangent vectors. By integrating local segments, the metric tensor allows us to calculate the global length of curves in the manifold.

The main advantage of this model is that it is conformal; as a result, earlier research into Hyperbolic word embeddings leveraged this model [2, 25, 53]. Furthermore, there were existing artifacts built with this model, such as the Poincaré embeddings by [38], that we could reuse for this work.

IV-B Lorentz model

The Lorentz model (also known as the hyperboloid or Minkowski model) is a model of hyperbolic space in which points are represented by the points on the surface of the upper sheet of a two-sheeted hyperboloid in $(n+1)$-dimensional Minkowski space. It is a combination of $n$-dimensional spatial Euclidean coordinates $x_i$ for $i = 1, \ldots, n$; and a time coordinate $x_0 > 0$. Therefore, given points $x, y \in \mathbb{R}^{n+1}$, the Lorentzian inner product (Minkowski bilinear form) is:

$$\langle x, y \rangle_{\mathcal{L}} = -x_0 y_0 + \sum_{i=1}^{n} x_i y_i. \qquad (3)$$

The product of a point with itself is $\langle x, x \rangle_{\mathcal{L}} = -1$, thus we can compute the norm as $\|x\|_{\mathcal{L}} = \sqrt{|\langle x, x \rangle_{\mathcal{L}}|}$. We define the Lorentz model as a Riemannian manifold $(\mathbb{L}^n, g^{\mathbb{L}})$ where $\mathbb{L}^n = \{x \in \mathbb{R}^{n+1} : \langle x, x \rangle_{\mathcal{L}} = -1,\ x_0 > 0\}$ and the metric tensor is $g^{\mathbb{L}} = \mathrm{diag}(-1, 1, \ldots, 1)$. Hence, given the vector representation of a word in Euclidean space as $(x_1, \ldots, x_n)$, the word’s corresponding vector location in the hyperboloid model is $(x_0, x_1, \ldots, x_n)$, where the first coordinate $x_0$ is:

$$x_0 = \sqrt{1 + \sum_{i=1}^{n} x_i^2}. \qquad (4)$$

The hyperbolic distance function admits a simple expression in $\mathbb{L}^n$ and is given as:

$$d_{\mathcal{L}}(x, y) = \operatorname{arcosh}\bigl(-\langle x, y \rangle_{\mathcal{L}}\bigr). \qquad (5)$$

This distance function satisfies the axioms of a metric space (i.e. identity of indiscernibles, symmetry and the triangle inequality). Its simplicity and satisfaction of the axioms make it the ideal model for constructing our privacy proof.
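These primitives translate directly into code. Below is a minimal sketch (function names are ours), storing the time coordinate $x_0$ first in each array:

```python
# Lorentz-model primitives from Eqns. 3-5, with the time coordinate first.
import numpy as np

def lift_to_hyperboloid(x: np.ndarray) -> np.ndarray:
    """Eqn. 4: prepend x0 = sqrt(1 + ||x||^2) to a Euclidean vector."""
    return np.concatenate(([np.sqrt(1.0 + x @ x)], x))

def lorentz_inner(x: np.ndarray, y: np.ndarray) -> float:
    """Eqn. 3: <x, y>_L = -x0*y0 + sum_i x_i*y_i."""
    return float(-x[0] * y[0] + x[1:] @ y[1:])

def lorentz_distance(x: np.ndarray, y: np.ndarray) -> float:
    """Eqn. 5: d_L(x, y) = arcosh(-<x, y>_L); the clamp guards round-off."""
    return float(np.arccosh(max(1.0, -lorentz_inner(x, y))))
```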

(a)
(b)
(c)
Fig. 2: (a) Tiling a square and triangle in the Poincaré disk such that all line segments have identical hyperbolic length. (b) The forward sheet of a two-sheeted hyperboloid in the Lorentz model. (c) Projection [52] of a point in the Lorentz model to the Poincaré model. (d) Embedding WebIsADb is-a relationships in the GloVe vocabulary into the Poincaré disk.

IV-C Connection between the models

Both models essentially describe the same structure of hyperbolic space characterized by its constant negative curvature; they simply represent different coordinate charts in the same metric space. Therefore, the Lorentz and Poincaré models can be related by a diffeomorphic transformation that preserves all the properties of the geometric space, including isometry. From Fig. 2(c), we observe that a point $y$ in the Poincaré model is a projection from the point $x = (x_0, x_1, \ldots, x_n)$ in the Lorentz model to the hyperplane $x_0 = 0$, by intersecting it with a line drawn through $(-1, 0, \ldots, 0)$. Consequently, we can map this point across manifolds from the Lorentz to the Poincaré model via the transformation $p : \mathbb{L}^n \to \mathbb{B}^n$ where:

$$p(x_0, x_1, \ldots, x_n) = \frac{(x_1, \ldots, x_n)}{1 + x_0}. \qquad (6)$$

In this work, we only require transformations to the Poincaré model, i.e., using Eqns. 4 and 6. Mapping points back from the Poincaré to the Lorentz model is done via:

$$p^{-1}(y_1, \ldots, y_n) = \frac{(1 + \|y\|^2,\ 2y_1, \ldots, 2y_n)}{1 - \|y\|^2}.$$
As a result of the equivalence of the models, in this paper we adopt the Lorentz model for constructing our $d_\chi$-privacy proof, while the word embeddings were trained in the Poincaré ball model. Consequently, the Poincaré model is also used as the basis for sampling noise from a high dimensional distribution to provide the privacy and semantic guarantees.
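A short sketch of the coordinate changes of Eqn. 6 and its inverse (again with the time coordinate first; names are ours):

```python
# Mapping between the Lorentz point (x0, x1, ..., xn) and the Poincaré point
# y; the two functions are mutually inverse diffeomorphisms.
import numpy as np

def lorentz_to_poincare(x: np.ndarray) -> np.ndarray:
    """Eqn. 6: project (x0, x') on the hyperboloid to y = x' / (1 + x0)."""
    return x[1:] / (1.0 + x[0])

def poincare_to_lorentz(y: np.ndarray) -> np.ndarray:
    """Inverse map: (1 + ||y||^2, 2*y) / (1 - ||y||^2)."""
    sq = float(y @ y)
    return np.concatenate(([1.0 + sq], 2.0 * y)) / (1.0 - sq)
```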

V Privacy proof and Sampling mechanism

In this section, we provide the proof of $d_\chi$-privacy for Hyperbolic space embeddings. We will be using the Lorentz model of [37] rather than the Poincaré model proposed in [38]. Then in Sec. V-B, we introduce our probability distribution for adding noise (line 3 of Alg. 1) to the word embedding vectors. Finally, we describe how to sample from the proposed distribution in Sec. V-C. We note that whereas the privacy proof is provided in the Lorentz model, the noise distribution and the embeddings are in the Poincaré model. See Sec. IV-C for discussions on the equivalence of the models.

V-A $d_\chi$-privacy proof

In this section, we will show $d_\chi$-privacy for the hyperboloid embeddings of [37]. In the following, given $x, y \in \mathbb{R}^{n+1}$, we use the Lorentzian inner product from Eqn. 3, i.e. $\langle x, y \rangle_{\mathcal{L}} = -x_0 y_0 + \sum_{i=1}^{n} x_i y_i$. The space $(\mathbb{H}^n, d_{\mathcal{L}})$, where $\mathbb{H}^n = \{x \in \mathbb{R}^{n+1} : \langle x, x \rangle_{\mathcal{L}} = -1,\ x_0 > 0\}$, is the hyperboloid model of $n$-dimensional (real) hyperbolic space.

Lemma 1.

If $x, y \in \mathbb{H}^n$, then $-\langle x, y \rangle_{\mathcal{L}} \ge 1$, with equality only if $x = y$.

Proof.

Using the Cauchy–Schwarz inequality for the Euclidean inner product in $\mathbb{R}^n$ for the first inequality and a simple calculation for the second, we have:

$$\sum_{i=1}^{n} x_i y_i \le \sqrt{(x_0^2 - 1)(y_0^2 - 1)} \le x_0 y_0 - 1,$$

so that $-\langle x, y \rangle_{\mathcal{L}} = x_0 y_0 - \sum_{i=1}^{n} x_i y_i \ge 1$. Any line through the origin intersects $\mathbb{H}^n$ in at most one point, so Cauchy’s inequality is an equality if and only if $x = y$ (as a consequence of using the positive roots).

Now, as was the case for the Euclidean metric, we can use the triangle inequality for the metric $d_{\mathcal{L}}$, which implies that for any $\hat{w}, x, x' \in \mathbb{H}^n$ we have the following inequality:

$$e^{-\varepsilon\, d_{\mathcal{L}}(\hat{w}, x)} \le e^{\varepsilon\, d_{\mathcal{L}}(x, x')}\, e^{-\varepsilon\, d_{\mathcal{L}}(\hat{w}, x')}.$$

Thus, as before, by plugging the last two derivations together and observing that the normalization constants of the densities centered at $x$ and at $x'$ are the same, we obtain:

$$\frac{\Pr[M(x) = \hat{w}]}{\Pr[M(x') = \hat{w}]} \le e^{\varepsilon\, d_{\mathcal{L}}(x, x')}.$$

Thus, the mechanism is $\varepsilon d_{\mathcal{L}}$-privacy preserving. The remainder of the proof is identical to the Euclidean case of [23].
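The core inequality can also be checked numerically. The sketch below (our illustration, not part of the paper’s proof) draws random hyperboloid points and verifies that every ratio of unnormalized densities $\exp(-\varepsilon d_{\mathcal{L}}(\hat{w}, x)) / \exp(-\varepsilon d_{\mathcal{L}}(\hat{w}, x'))$ stays below $\exp(\varepsilon d_{\mathcal{L}}(x, x'))$:

```python
# Numeric sanity check of the proof's key inequality (our illustration).
# Compact copies of the Lorentz helpers from Sec. IV-B keep this standalone.
import numpy as np

def lift(x):
    """Eqn. 4: lift a Euclidean vector onto the hyperboloid."""
    return np.concatenate(([np.sqrt(1.0 + x @ x)], x))

def d_L(x, y):
    """Eqn. 5: d_L(x, y) = arcosh(x0*y0 - sum_i x_i*y_i)."""
    return float(np.arccosh(max(1.0, x[0] * y[0] - x[1:] @ y[1:])))

rng = np.random.default_rng(0)
eps = 2.0
x, x_prime = lift(rng.normal(size=5)), lift(rng.normal(size=5))
bound = np.exp(eps * d_L(x, x_prime))
for _ in range(1000):
    w_hat = lift(rng.normal(size=5, scale=3.0))
    ratio = np.exp(eps * (d_L(w_hat, x_prime) - d_L(w_hat, x)))
    assert ratio <= bound * (1.0 + 1e-9)  # triangle inequality in action
print("all density ratios respect exp(eps * d_L(x, x'))")
```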

V-B Probability distribution for sampling noise

In this section we describe the Hyperbolic distribution from which we sample our noise perturbations. One option was sampling from the Hyperbolic normal distribution proposed by [45] (discussed in [35] and [40]), whose pdf takes the Riemannian-normal form

$$p(z \mid \mu, \sigma) = \frac{1}{Z(\sigma)} \exp\!\left(-\frac{d_{\mathbb{B}}(\mu, z)^2}{2\sigma^2}\right).$$

However, our aim was to sample from the family of generalized Hyperbolic distributions which reduce to the Laplacian distribution at particular location and scale parameters. By taking this approach, we can build on the proof proposed in the Euclidean case of $d_\chi$-privacy where noise was sampled from a planar Laplace distribution [4, 10].

In the Poincaré model of hyperbolic space, we have the following distance function defined in Eqn. 2 (specialized to distances from the origin):

$$d_{\mathbb{B}}(0, z) = \operatorname{arcosh}\!\left(1 + 2\frac{\|z\|^2}{1 - \|z\|^2}\right) = 2 \tanh^{-1}(\|z\|).$$

Now, analogous to the Euclidean distance used for the Laplace distribution, we wish to construct a distribution that matches this distance function. This will take the form:

$$p(z; \varepsilon) \propto \exp\bigl(-\varepsilon\, d_{\mathbb{B}}(\mu, z)\bigr).$$

In all cases, our noise will be centered at the origin, and hence $\mu = 0$:

$$p(z; \varepsilon) \propto \exp\bigl(-\varepsilon\, d_{\mathbb{B}}(0, z)\bigr) = \exp\bigl(-2\varepsilon \tanh^{-1}(\|z\|)\bigr).$$

Next, setting the radial variable $r = \|z\|$ and integrating the density against the hyperbolic volume element of the ball yields the normalization constant $Z(\varepsilon)$, which can be expressed in terms of the hypergeometric function ${}_2F_1$, defined for $|t| < 1$ by the power series

$${}_2F_1(a, b; c; t) = \sum_{k=0}^{\infty} \frac{(a)_k (b)_k}{(c)_k} \frac{t^k}{k!},$$

where $(q)_k$ is the Pochhammer symbol (rising factorial). Note that $Z(\varepsilon)$ does not depend on $z$, and hence can be computed in closed form a priori. Our distribution is therefore:

$$p(z; \varepsilon) = \frac{1}{Z(\varepsilon)} \exp\bigl(-\varepsilon\, d_{\mathbb{B}}(0, z)\bigr). \qquad (7)$$

The result shown in Fig. 3 for different values of $\varepsilon$ illustrates the PDF of the new distribution derived from Eqn. 7.

Fig. 3: PDF of Eqn. 7 at different values of $\varepsilon$

V-C Sampling from the distribution

Since we are unable to sample directly from the high dimensional hyperbolic distribution in Eqn. 7, we derive sampled points by simulating random walks over it using the Metropolis–Hastings (MH) algorithm. A similar approach was adopted by [35] and [40] to sample points from high dimensional Riemannian normal distributions using Monte Carlo samples. We start with $p(z; \varepsilon)$, our desired probability distribution as defined in Eqn. 7, where $\varepsilon$ is the $d_\chi$-privacy parameter. Then we choose a starting point $x_0$ to be the first sample. The point is set at the origin of the Poincaré model (see Fig. 2(c)). The sample is then updated as the current point $x_t$.

To select the next candidate $x'$, MH requires the point be sampled ideally from a symmetric distribution $g$ such that $g(x' \mid x_t) = g(x_t \mid x')$, for example a Gaussian distribution centered at $x_t$. To achieve this, we sampled from the multivariate normal distribution in Euclidean space, centered at $x_t$. The sampled point is then translated to the Lorentz model in $(n+1)$-dimensional Minkowski space by setting the first coordinate using Eqn. 4. The Lorentz coordinates are then converted to the Poincaré model in Hyperbolic space using Eqn. 6. Therefore, the final coordinates of the sampled point $x'$ lie in the Poincaré model.

Next, for every MH iteration, we calculate an acceptance ratio $\alpha = p(x'; \varepsilon) / p(x_t; \varepsilon)$ with our privacy parameter $\varepsilon$. If a uniform random number $u$ is less than $\alpha$, we accept the sampled point by setting $x_{t+1} = x'$ (and sample the next point centered at this new point); otherwise, we reject the sampled point by setting $x_{t+1}$ to the old point $x_t$.

Input: dimension $n$, iterations $T$, privacy parameter $\varepsilon$
Result: a sample from $p(z; \varepsilon)$
1 Let $p(z; \varepsilon)$ be the Hyperbolic noise distribution in $n$ dimensions
2        set $x_0 = o$ (the origin of the Poincaré ball); set $x_0$ as the initial sample; set burn-in period $B$
3        while $t \le T$ do
4               sample $x' \sim \mathcal{N}(x_t, \sigma^2 I)$; translate $x'$ to the Poincaré ball via Eqns. 4 and 6; compute $\alpha = p(x'; \varepsilon)/p(x_t; \varepsilon)$; sample $u \sim \mathrm{Uniform}(0, 1)$
5               if $u \le \alpha$ then
6                      accept sample: set $x_{t+1} = x'$
7               else
8                      reject sample: set $x_{t+1} = x_t$
9 release $x_T$
Algorithm 2 Hyperbolic Noise Sampling Mechanism
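A hedged Python sketch of Algorithm 2 follows; the proposal scale `sigma`, the step count, and the function names are illustrative assumptions rather than the paper’s settings. As in the description above, the symmetric Gaussian proposal is taken in Euclidean coordinates before translation to the ball.

```python
# Metropolis-Hastings random walk targeting the hyperbolic density of Eqn. 7.
import numpy as np

def to_ball(x: np.ndarray) -> np.ndarray:
    """Lift via Eqn. 4, then project via Eqn. 6:
    y = x / (1 + sqrt(1 + ||x||^2))."""
    return x / (1.0 + np.sqrt(1.0 + x @ x))

def dist_to_origin(z: np.ndarray) -> float:
    """Eqn. 2 specialized to d_B(0, z) = 2 * artanh(||z||)."""
    return float(2.0 * np.arctanh(np.linalg.norm(z)))

def sample_hyperbolic_noise(n: int, epsilon: float, steps: int = 5000,
                            sigma: float = 0.3, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    euc = np.zeros(n)          # current point in Euclidean coordinates
    cur = to_ball(euc)         # its image at the origin of the Poincaré ball
    for _ in range(steps):
        euc_prop = rng.normal(loc=euc, scale=sigma)   # symmetric proposal
        prop = to_ball(euc_prop)
        # acceptance ratio for the target exp(-epsilon * d_B(0, z))
        alpha = np.exp(epsilon * (dist_to_origin(cur) - dist_to_origin(prop)))
        if rng.uniform() <= alpha:                    # accept
            euc, cur = euc_prop, prop
        # else: reject and keep the current point
    return cur  # final state; early iterations act as the burn-in period
```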

V-D Ensuring numerical stability

Sampling in high dimensional Hyperbolic spaces comes with numeric stability issues [38, 37, 35], which worsen as the curvature and dimensionality of the space increase. This leads to points being consistently sampled at an infinite distance from the mean. Using an approach similar to [38], we constrain the updated vector to remain within the Poincaré ball by renormalizing any noisy vector that reaches the boundary back to just inside the unit sphere, offset by a small constant. This occurs as a post-processing step and therefore does not affect the $d_\chi$-privacy proof. In our experiments, we set the value of the constant as in [38].
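A one-function sketch of this projection (the function name is ours, and the default constant of $10^{-5}$ is the value used in [38], assumed here):

```python
# Post-processing projection: pull any noisy vector that escapes the open
# unit ball back just inside it, following the constraint used in [38].
import numpy as np

def project_to_ball(theta: np.ndarray, eps_ball: float = 1e-5) -> np.ndarray:
    norm = np.linalg.norm(theta)
    if norm >= 1.0:
        return theta / norm * (1.0 - eps_ball)  # just inside the boundary
    return theta
```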

V-E Poincaré embeddings for our analysis

The geometric structure of the Poincaré embeddings represents the metric space over which we provide privacy guarantees. By visualizing the embeddings in the Poincaré disk (see Fig. 4), we observe that higher order concepts are distributed towards the center of the disk, instances are found closer to the perimeter, and similar words are equidistant from the origin.

In this work, we train the Poincaré embeddings described in [38]. To train, we use data from WebIsADb, a large database of over 400 million hypernymy relations extracted from the CommonCrawl web corpus. We narrowed the dataset by only selecting relations of words (i.e., both the instance and the class) that occurred in the GloVe vocabulary. To ensure we had high quality data, we further restricted the data to links that had been found a minimum number of times in the CommonCrawl corpus. Finally, we filtered out stop words, offensive words and outliers from the dataset, yielding our final set of extracted is-a relations. We use this dataset to train a Poincaré embedding model, which we use for all our analysis and experiments.
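As a rough sketch of this training pipeline, gensim ships an implementation of the Poincaré model of [38]; the input file name, hyperparameters and dimensionality below are illustrative assumptions, not the paper’s settings.

```python
# Train Poincaré embeddings on (instance, class) is-a pairs with gensim.
from gensim.models.poincare import PoincareModel

relations = []
with open("webisadb_glove_filtered.tsv", encoding="utf-8") as f:
    for line in f:
        instance, cls = line.rstrip("\n").split("\t")
        relations.append((instance, cls))        # e.g. ("london", "city")

model = PoincareModel(relations, size=100, negative=10)
model.train(epochs=50)
model.save("poincare_webisadb.model")
print(model.kv.distance("london", "city"))       # hyperbolic distance
```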

Fig. 4: Embedding WebIsADb is-a relationships in the GloVe vocabulary into the Poincaré disk

VI Privacy calibration

In this section, we describe our approach for calibrating the values of $\varepsilon$ for a given mechanism $M$. For all our discussions, $\hat{w} = w$ means the privacy mechanism returned the same word, while $\hat{w} \neq w$ represents a different random word from the algorithm. We now define the privacy guarantees that result in uncertainty for the adversary over the outputs of $M$, and indistinguishability over the inputs to $M$.

VI-A Uncertainty statistics

The uncertainty of an adversary is defined over the probability of predicting the value of the random variable $\hat{w} = M(w)$, i.e. $\Pr[\hat{w} = w]$. This follows from the definition of Shannon entropy, which is the number of additional bits required by the adversary to reveal the user’s identity or some secret property. However, even though entropy is a measure of uncertainty, there are issues with directly adopting it as a privacy metric [55], since it is possible to construct different probability distributions with the same level of entropy.

Nevertheless, we still resort to defining the uncertainty statistics by using the two extremes of the Rényi entropy [43]. The Hartley entropy is the special case of Rényi entropy with $\alpha = 0$. It depends on the vocabulary size and is therefore a best-case scenario, as it represents the perfect privacy scenario for a user as the number of words grows. It is given by $H_0 = \log_2 |\mathcal{W}|$. Min-entropy is the special case with $\alpha = \infty$, which is a worst-case scenario because it depends on the adversary attaching the highest probability to a specific word $w$. It is given by $H_\infty = -\log_2 \max_{w \in \mathcal{W}} \Pr[w]$.

We now describe proxies for the Hartley and Min-entropy. First, we observe that the mechanism $M$ at any $\varepsilon$ has full support over the entire vocabulary $\mathcal{W}$. However, empirically, the effective number of new words returned by the mechanism over multiple runs approaches a finite subset. As a result, we can expect this number to be far smaller than $|\mathcal{W}|$ over a finite number of successive runs of the mechanism $M$. We define this effective number of distinct outputs, at each value of $\varepsilon$ and for each word $w$, as $S_w$. Therefore, our estimate of the Hartley entropy becomes $\hat{H}_0 = \log_2 S_w$.

Similarly, we expect that over multiple runs of the mechanism $M$, as $\varepsilon \to \infty$, the probability $\Pr[\hat{w} = w]$ increases and approaches $1$. As a result, over $k$ successive runs of the mechanism $M$ we can count the number of runs for which $\hat{w} = w$. We define this number, at each value of $\varepsilon$ and for each word $w$, as $N_w$. Therefore, our estimate of the Min-entropy becomes $\hat{H}_\infty = -\log_2 (N_w / k)$.

We estimated the quantities $N_w$ and $S_w$ empirically by running the mechanism $k$ times for a random population of words from the vocabulary at different values of $\varepsilon$. The results are presented in Fig. 5(a) and 5(b) for $N_w$, and Fig. 5(c) for $S_w$.
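A sketch of this estimation loop (our illustration; `mechanism` stands for any single-word randomizer $M(w; \varepsilon)$ that returns a word, such as Algorithm 1 applied to one word, and its signature is an assumption):

```python
# Estimate the proxy statistics by running M k times per word: N_w counts
# unchanged outputs (Min-entropy proxy); S_w counts distinct outputs
# (Hartley-entropy proxy).
from collections import Counter

def privacy_statistics(mechanism, words, epsilon, k=1000):
    stats = {}
    for w in words:
        outputs = Counter(mechanism(w, epsilon) for _ in range(k))
        stats[w] = {"N_w": outputs[w], "S_w": len(outputs)}
    return stats

# e.g. stats = privacy_statistics(my_mechanism, sampled_words, epsilon=5.0)
```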

(a) average case
(b) worst case
(c) uncertainty statistics
(d) indistinguishability
Fig. 5: Privacy statistics – (a) $N_w$ statistics: average count of $\hat{w} = w$ (b) $N_w$ statistics: max count of $\hat{w} = w$ (c) $S_w$ statistics: distinct outputs for each $w$ (d) $S'_{\hat{w}}$ statistics: count of different words which resolve to the same output

VI-B Indistinguishability statistics

Indistinguishability metrics of privacy denote the ability of the adversary to distinguish between two items of interest. $d_\chi$-privacy provides degrees of indistinguishability of outputs bounded by the privacy loss parameter $\varepsilon$. For example, given a query ‘send the package to London’, corresponding outputs of ‘send the package to England’ or ‘…to Britain’ provide privacy by output indistinguishability for the user. This is captured as uncertainty over the number of random new words as expressed in the $S_w$ metric.

However, we also extend our privacy guarantees to indistinguishability over the inputs. For example, for an adversary observing the output ‘send the package to England’, they are unable to infer the input that created the output because, for the perturbed word England, the original word could have been any member of the set $\{w : w \preceq England\}$, where $w \preceq w'$ implies that $w$ is lower than $w'$ in the embedding hierarchy; for example, London ≺ England. Since this new statistic derives from $S_w$, we expect it to vary across $\varepsilon$ in the same manner. Hence, we index it by the output $\hat{w}$ and define the new statistic as:

$$S'_{\hat{w}} = \bigl|\{w' \in \mathcal{W} : M(w') = \hat{w}\}\bigr|.$$

This input indistinguishability statistic can be thought of formally in terms of plausible deniability [7]. In [7], plausible deniability states that an adversary cannot deduce that a particular input was significantly more responsible for an observed output; there must exist a set of inputs that could have generated the given output with about the same probability. Therefore, given a vocabulary of size $|\mathcal{W}|$ and mechanism $M$ such that $\hat{w} = M(w)$, we get indistinguishability over the inputs with parameter $\gamma$ if there are at least $k$ distinct words $w' \in \mathcal{W} \setminus \{w\}$ such that:

$$\gamma^{-1} \le \frac{\Pr[M(w) = \hat{w}]}{\Pr[M(w') = \hat{w}]} \le \gamma.$$

We also estimated the values of $S'_{\hat{w}}$ empirically by running the mechanism $k$ times for a random population of words from the vocabulary at different values of $\varepsilon$. The results are presented in Fig. 5(d).
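The input-side statistic can be estimated the same way, by inverting the output map over many runs (again an illustrative sketch with an assumed single-word `mechanism`):

```python
# For each observed output w_hat, count how many distinct inputs produced it
# across k runs: an empirical estimate of S'_{w_hat}.
from collections import defaultdict

def indistinguishability_counts(mechanism, words, epsilon, k=1000):
    inputs_per_output = defaultdict(set)
    for w in words:
        for _ in range(k):
            inputs_per_output[mechanism(w, epsilon)].add(w)
    return {w_hat: len(inputs) for w_hat, inputs in inputs_per_output.items()}
```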

VI-C Selecting a value of $\varepsilon$

To set the value of $\varepsilon$ for a given task, we propose following the guidelines offered by [11] in the context of location privacy, by providing appropriate reformulations. They suggest mapping $\varepsilon$ to a desired radius of high protection within which all points have the same distinguishability level. We can achieve a corresponding calibration using the results in Fig. 5.

The worst-case guarantees highlighted by the upper bound of the $N_w$ statistic (see Fig. 5(b)) equip us with a way to fix an equivalent ‘radius of high protection’. This ‘radius’ corresponds to the upper bound on the probability $\Pr[\hat{w} = w]$, which sets the guarantee on the likelihood of changing any word in the embedding vocabulary. Consequently, the words which are provided with the ‘same distinguishability level’ can be interpreted via the size of the $S_w$ results in Fig. 5(c), and by extension, Fig. 5(d). In the following sections, we investigate the impact of setting varying values of $\varepsilon$ on the performance of downstream ML models, and how the privacy guarantees of our Hyperbolic model compare to the Euclidean baseline.

VII Experiments

We carry out experiments to illustrate the tradeoff between privacy and utility. The first two are privacy experiments, while the third is a set of utility experiments against ML tasks.

VII-A Evaluation metrics

  • Author predictions: the number of authors that were re-identified in a dataset. Lower is better for privacy.

  • $N_w$: the number of times (out of $k$ runs) where the mechanism returned the original word. Lower is better for privacy.

  • Accuracy: the percentage of predictions the downstream model got right. Higher is better for utility.

VII-B Privacy experiments I

In this section, we describe how we carried out privacy evaluations using an approach similar to [22].

VII-B1 Task

The task is to carry out author obfuscation against an authorship attribution algorithm.

VII-B2 Baselines

Just as in [22], the ‘adversarial’ attribution algorithm is Koppel’s algorithm [30]. Evaluation datasets were selected from the PAN@Clef tasks as follows:

  • PAN11 [5]: comprises a small dataset and a large dataset of documents and authors, both derived from the Enron emails.

  • PAN12 [27]: unlike the short email lengths per author in PAN11, this dataset consisted of dense volumes per author. Set-A had authors with texts within a bounded range of words; sets C and D had authors with a fixed number of words each; while set-I consisted of authors of novels with substantially larger word counts.

VII-B3 Experiment setup

We ran each dataset against Koppel’s algorithm [30] to get the baseline. Each dataset was then passed through our $d_\chi$-privacy algorithm to get a new text output. This was done line by line in a manner similar to [22], i.e. all non stop words were considered. We evaluated our approach at a range of values of $\varepsilon$.

VII-B4 Privacy experiment results

The results in Table III show that our algorithm provides tunable privacy guarantees against the authorship model. It also extends guarantees to authors with thousands of words. As $\varepsilon$ increases, the privacy guarantees decrease, as clearly evidenced by the PAN11 tasks. We only show results for Koppel’s algorithm because other evaluations perform worse on the baselines, e.g. the PAN18 and PAN19-SVM baselines identify fewer authors in the PAN11 small and large datasets.

          PAN11                    PAN12
          small    large    set-A    set-C    set-D    set-I
TABLE III: Correct author predictions (lower is better)

VII-C Privacy experiments II

We now describe how we evaluate the privacy guarantees of our Hyperbolic model against the Euclidean baseline.

VII-C1 Task and baselines

The objective was to compare the expected privacy guarantees for our Hyperbolic model vs. the Euclidean baseline, given the same worst case guarantees. We evaluated against GloVe embeddings at three different dimensionalities.

VII-C2 Experiment setup

We designed the $d_\chi$-privacy algorithm in the Euclidean space as follows: (a) the embedding model was GloVe, using the same vocabulary as in the Poincaré embeddings described in Sec. V-E; (b) we sampled using the multivariate Laplacian distribution, by extending the planar Laplacian in [4] using the technique in [23]; (c) we calibrated the Euclidean $\varepsilon$ values by computing the privacy statistics for a given Hyperbolic $\varepsilon$ value.

To run the experiment, we repeat the following for the Hyperbolic and Euclidean embeddings: (1) first, we select a value of $\varepsilon$; (2) we empirically compute the worst case guarantee, i.e. the largest maximum number of times we get $\hat{w} = w$ for any word rather than a new word after our noise perturbation; (3) we compute the expected guarantee, i.e. the average number of times we get $\hat{w} = w$ over all words each time we perturb a word $w$.

VII-C3 Privacy experiment results

The results for the comparative privacy analysis are presented in Tab. IV. The results clearly demonstrate that for identical worst case guarantees, the expected case for the Hyperbolic model is significantly better than the Euclidean across all Euclidean dimensions. Combining this with the superior ability of the Hyperbolic model to encode both similarity and hierarchy even at lower dimensions provides a strong argument for adopting it as a $d_\chi$-privacy preserving mechanism for the motivating examples described in Sec. II-A.

                        expected value
worst-case      hyp-      euc-      euc-      euc-
1.25
1.62
2.07
3.92
140.67

TABLE IV: Privacy comparisons (lower is better)

VII-D Utility experiments

Having established Hyperbolic embeddings as being better than the Euclidean baseline for $d_\chi$-privacy, we now demonstrate the effects on the utility of downstream models (i.e. we conduct utility experiments only on Hyperbolic embeddings).

VII-D1 ML Tasks

We ran experiments on the eight tasks in Tab. V (classification and natural language tasks) to highlight the tradeoff between privacy and utility for a broad range of tasks.

name         task                          classes   example(s)
MR [41]      sentiment (movies)            2         neg, pos
CR [26]      product reviews               2         neg, pos
MPQA [58]    opinion polarity              2         neg, pos
SST-5 [50]   sentiment (movies)            5         0
TREC-6 [33]  question-type                 6         LOC:city
SICK-E [34]  natural language inference    3         contradiction
MRPC [18]    paraphrase detection          2         paraphrased
STS14 [1]    semantic textual similarity   [0, 5]    4.6
TABLE V: Classification and natural language tasks

VII-D2 Task baselines

The utility results were baselined using SentEval [15], an evaluation toolkit for sentence embeddings. We evaluated the utility of our algorithm against an upper and a lower bound.

To set an upper bound on utility, we ran each task using the original datasets. Each task was done on the following embedding representations: (1) InferSent [14], (2) SkipThought [29] and (3) fastText [8] (as an average of word vectors).

To set a lower bound on the utility scores, rather than replacing words using our algorithm, we replaced them with random words from the embedding vocabulary.

VII-D3 Experiment setup

Unlike intent classification datasets such as [16] and [54], most datasets do not come with ‘slot values’ to be processed by a privacy preserving algorithm (see motivating examples in Sec. II-A). As a result, we pre-processed all the datasets to identify phrases with low transition probabilities using the privacy preserving algorithm proposed by [12].

The output from [12] yields a sequence of high frequency sub-phrases. As a result, for every query in each dataset, we are able to (inversely) select a set of low transition phrases which act as slot values to be fed into our algorithm.

For a given dataset, the output from processing each query using our algorithm is then fed into the corresponding task.

VII-D4 Utility experiment results

We evaluated our algorithm at a low and a high value of $\varepsilon$. Words were sampled from the metric space defined by the Poincaré embeddings described in Sec. V-E.

The results are presented in Tab. VI. The evaluation metric for all tasks was accuracy on the test set. Across all the experiments, our algorithm yielded better results than just replacing words with random words. In addition, and as expected, at lower values of $\varepsilon$ we record lower values of utility across all tasks. Conversely, at the higher value of $\varepsilon$, the accuracy scores get closer to the baselines. All the results illustrate the tradeoff between privacy and utility. They also show that we can achieve tunable privacy guarantees with minimal impact on the utility of downstream ML models.

                     hyp-$\varepsilon$                 original
dataset     random                 InferSent    SkipThought    fastText-BoV
MR
CR
MPQA
SST-5
TREC-6
SICK-E
MRPC
STS14
TABLE VI: Accuracy scores on classification tasks. * indicates results better than baseline, ** better than baselines

VIII Related work

There are two sets of research most similar to ours. The recent works by [23] and [22] apply $d_\chi$-privacy to text using similar techniques to ours. However, their approach was carried out in Euclidean space, while ours uses word representations in, and noise sampled from, Hyperbolic space. As a result, our approach can better preserve semantics at smaller values of $\varepsilon$ by selecting hierarchical replacements.

The next set includes research by [17], [46], and [3]. These all work by identifying sensitive terms in a document and replacing them with some generalization of the word. This is similar to what happens when we sample from Hyperbolic space towards the center of the Poincaré ball to select a hypernym of the current word. The difference between these and our work is the mechanism for selecting the words and the privacy model used to describe the guarantees provided.

IX Conclusion

This paper is the first to demonstrate how hierarchical word representations in Hyperbolic space can be deployed to satisfy $d_\chi$-privacy in the text domain. We presented a theoretical proof of the privacy guarantees in addition to defining a probability distribution for sampling privacy preserving noise from Hyperbolic space. Our experiments illustrate that our approach preserves privacy against an author attribution model and utility on several downstream models. Compared to the Euclidean baseline, we observe > 20x greater guarantees on expected privacy against comparable worst case statistics. Our results significantly advance the study of $d_\chi$-privacy, making generalized differential privacy with provable guarantees closer to practical deployment in the text domain.

References

  • [1] E. Agirre, C. Banea, C. Cardie, D. Cer, M. Diab, A. Gonzalez-Agirre, W. Guo, R. Mihalcea, G. Rigau, and J. Wiebe (2014) Semeval-2014 task 10: multilingual semantic textual similarity. In SemEval, Cited by: TABLE V.
  • [2] G. Alanis-Lobato, P. Mier, and M. A. Andrade-Navarro (2016) Efficient embedding of complex networks to hyperbolic space via their laplacian. Scientific reports 6, pp. 30108. Cited by: §IV-A.
  • [3] B. Anandan, C. Clifton, W. Jiang, M. Murugesan, P. Pastrana-Camacho, and L. Si (2012) T-plausibility: generalizing words to desensitize text. Trans. Data Privacy 5 (3), pp. 505–534. Cited by: §VIII.
  • [4] M. E. Andrés, N. E. Bordenabe, K. Chatzikokolakis, and C. Palamidessi (2013) Geo-indistinguishability: differential privacy for location-based systems. In ACM SIGSAC CCS, pp. 901–914. Cited by: §I, §I, §III-C, §V-B, §VII-C2.
  • [5] S. Argamon and P. Juola (2011) Overview of the international authorship identification competition at PAN-2011.. In CLEF, Cited by: 1st item.
  • [6] D. E. Bambauer (2013) Privacy versus security. Journal of Criminal Law & Criminology 103, pp. 667. Cited by: §II.
  • [7] V. Bindschaedler, R. Shokri, and C. A. Gunter (2017) Plausible deniability for privacy-preserving data synthesis. VLDB Endowment. Cited by: §I, §VI-B.
  • [8] P. Bojanowski, E. Grave, A. Joulin, and T. Mikolov (2017) Enriching word vectors with subword information. TACL 5. Cited by: §III-B, §VII-D2.
  • [9] M. Bun, J. Ullman, and S. Vadhan (2018) Fingerprinting codes and the price of approximate differential privacy. SIAM Journal on Computing. Cited by: §I.
  • [10] K. Chatzikokolakis, M. E. Andrés, N. E. Bordenabe, and C. Palamidessi (2013) Broadening the scope of differential privacy using metrics. In Intl. Symposium on Privacy Enhancing Technologies Symposium, Cited by: §I, §III-A, §III-A, §V-B.
  • [11] K. Chatzikokolakis, C. Palamidessi, and M. Stronati (2015) Constructing elastic distinguishability metrics for location privacy. PETS. Cited by: §I, §III-A, §VI-C.
  • [12] R. Chen, G. Acs, and C. Castelluccia (2012) Differentially private sequential data publication via variable-length n-grams. In SIGSAC CCS, Cited by: §VII-D3.
  • [13] M. Coavoux, S. Narayan, and S. B. Cohen (2018) Privacy-preserving neural representations of text. In EMNLP, Cited by: §I.
  • [14] A. Conneau, D. Kiela, H. Schwenk, L. Barrault, and A. Bordes (2017) Supervised learning of universal sentence representations from natural language inference data. In EMNLP, Cited by: §VII-D2.
  • [15] A. Conneau and D. Kiela (2018) Senteval: an evaluation toolkit for universal sentence representations. arXiv preprint arXiv:1803.05449. Cited by: §VII-D2.
  • [16] A. Coucke, A. Saade, A. Ball, T. Bluche, A. Caulier, et al. (2018) Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces. arXiv:1805.10190. Cited by: §II-A, §VII-D3.
  • [17] C. M. Cumby and R. Ghani (2011) A machine learning based system for semi-automatically redacting documents.. In IAAI, Cited by: §VIII.
  • [18] B. Dolan, C. Quirk, and C. Brockett (2004) Unsupervised construction of large paraphrase corpora: exploiting massively parallel news sources. In COLING, Cited by: TABLE V.
  • [19] C. Dwork, F. McSherry, K. Nissim, and A. Smith (2006) Calibrating noise to sensitivity in private data analysis. In TCC, Cited by: §I, §III-A.
  • [20] C. Dwork, A. Smith, T. Steinke, and J. Ullman (2017) Exposed! a survey of attacks on private data. Annual Rev. of Stats and Its Application. Cited by: §I, §I.
  • [21] Ú. Erlingsson, V. Pihur, and A. Korolova (2014) Rappor: randomized aggregatable privacy-preserving ordinal response. In SIGSAC CCS, Cited by: §I.
  • [22] N. Fernandes, M. Dras, and A. McIver (2019) Generalised differential privacy for text document processing. Principles of Security and Trust. Cited by: §III-C, §III-D, §VII-B2, §VII-B3, §VII-B, §VIII.
  • [23] O. Feyisetan, B. Balle, T. Diethe, and T. Drake (2020) Privacy- and utility-preserving textual analysis via calibrated multivariate perturbations. In ACM WSDM, Cited by: §III-C, §III-D, §V-A, §VII-C2, §VIII.
  • [24] O. Feyisetan, T. Drake, B. Balle, and T. Diethe (2019) Privacy-preserving active learning on sensitive data for user intent classification. arXiv preprint arXiv:1903.11112. Cited by: §I.
  • [25] O. Ganea, G. Becigneul, and T. Hofmann (2018) Hyperbolic entailment cones for learning hierarchical embeddings. In ICML, Cited by: §III-D, §IV-A.
  • [26] M. Hu and B. Liu (2004) Mining and summarizing customer reviews. In ACM SIGKDD, pp. 168–177. Cited by: TABLE V.
  • [27] P. Juola (2012) An overview of the traditional authorship attribution subtask.. In CLEF (Online Working Notes/Labs/Workshop), Cited by: 2nd item.
  • [28] S. P. Kasiviswanathan, H. K. Lee, K. Nissim, S. Raskhodnikova, and A. Smith (2011) What can we learn privately?. SIAM J. on Computing. Cited by: §I.
  • [29] R. Kiros, Y. Zhu, R. R. Salakhutdinov, R. Zemel, R. Urtasun, A. Torralba, and S. Fidler (2015) Skip-thought vectors. In NeurIPS, Cited by: §VII-D2.
  • [30] M. Koppel, J. Schler, and S. Argamon (2011) Authorship attribution in the wild. LREC 45 (1), pp. 83–94. Cited by: §VII-B2, §VII-B3.
  • [31] D. Krioukov, F. Papadopoulos, M. Kitsak, A. Vahdat, and M. Boguná (2010) Hyperbolic geometry of complex networks. Physical Review E 82 (3), pp. 036106. Cited by: §I, TABLE II, §IV.
  • [32] M. Le, S. Roller, L. Papaxanthos, D. Kiela, and M. Nickel (2019) Inferring concept hierarchies from text corpora via hyperbolic embeddings. arXiv preprint arXiv:1902.00913. Cited by: §III-D.
  • [33] X. Li and D. Roth (2002) Learning question classifiers. In COLING, Cited by: TABLE V.
  • [34] M. Marelli, S. Menini, M. Baroni, L. Bentivogli, R. Bernardi, and R. Zamparelli (2014) A sick cure for the evaluation of compositional distributional semantic models. In LREC, Cited by: TABLE V.
  • [35] E. Mathieu, C. L. Lan, C. J. Maddison, R. Tomioka, and Y. W. Teh (2019) Hierarchical representations with poincaré variational auto-encoders. arXiv preprint arXiv:1901.06033. Cited by: §V-B, §V-C, §V-D.
  • [36] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean (2013) Distributed representations of words and phrases and their compositionality. In NeurIPS, Cited by: §III-B.
  • [37] M. Nickel and D. Kiela (2018) Learning continuous hierarchies in the lorentz model of hyperbolic geometry. arXiv preprint arXiv:1806.03417. Cited by: §I, §III-D, §V-A, §V-D, §V.
  • [38] M. Nickel and D. Kiela (2017) Poincaré embeddings for learning hierarchical representations. In NeurIPS, Cited by: §I, §I, §III-D, §IV-A, §V-D, §V-D, §V-E, §V.
  • [39] H. Nissenbaum (2004) Privacy as contextual integrity. Washington Law Review 79, pp. 119. Cited by: §II.
  • [40] I. Ovinnikov (2019) Poincaré wasserstein autoencoder. arXiv preprint arXiv:1901.01427. Cited by: §V-B, §V-C.
  • [41] B. Pang and L. Lee (2005) Seeing stars: exploiting class relationships for sentiment categorization with respect to rating scales. In ACL, Cited by: TABLE V.
  • [42] J. Pennington, R. Socher, and C. Manning (2014) Glove: global vectors for word representation. In EMNLP, Cited by: §III-B.
  • [43] A. Rényi (1961) On measures of entropy and information. Technical report HUNGARIAN ACADEMY OF SCIENCES Budapest Hungary. Cited by: §VI-A.
  • [44] C. Ruegg, M. Cuda, and J. V. Gael (2009) Distance metrics. External Links: Link Cited by: Fig. 1.
  • [45] S. Said, L. Bombrun, and Y. Berthoumieu (2014) New riemannian priors on the univariate normal model. Entropy. Cited by: §V-B.
  • [46] D. Sánchez and M. Batet (2016) C-sanitized: a privacy model for document redaction and sanitization. JAIST 67 (1), pp. 148–163. Cited by: §VIII.
  • [47] P. M. Schwartz and D. J. Solove (2011) The PII problem: privacy and a new concept of personally identifiable information. NYUL rev.. Cited by: §I, §II.
  • [48] R. Shokri and V. Shmatikov (2015) Privacy-preserving deep learning. In SIGSAC CCS, pp. 1310–1321. Cited by: §I.
  • [49] R. Shokri, M. Stronati, C. Song, and V. Shmatikov (2017) Membership inference attacks against machine learning models. In SP, Cited by: §I.
  • [50] R. Socher, A. Perelygin, J. Wu, J. Chuang, C. D. Manning, A. Ng, and C. Potts (2013) Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP, pp. 1631–1642. Cited by: TABLE V.
  • [51] C. Song and V. Shmatikov (2019) Auditing data provenance in text-generation models. In ACM SIGKDD, External Links: Link Cited by: §I.
  • [52] M. D. Staley (2015) Models of hyperbolic geometry. Cited by: Fig. 2, §IV-A.
  • [53] A. Tifrea, G. Bécigneul, and O. Ganea (2018) Poincaré glove: hyperbolic word embeddings. arXiv preprint arXiv:1810.06546. Cited by: §IV-A.
  • [54] G. Tur, D. Hakkani-Tür, and L. Heck (2010) What is left to be understood in atis?. In 2010 IEEE Spoken Language Technology Workshop, Cited by: §VII-D3.
  • [55] I. Wagner and D. Eckhoff (2018) Technical privacy metrics: a systematic survey. ACM Computing Surveys (CSUR) 51 (3), pp. 57. Cited by: §II, §III-A, §VI-A.
  • [56] T. Wang, J. Blocki, N. Li, and S. Jha (2017) Locally differentially private protocols for frequency estimation. In USENIX, pp. 729–745. Cited by: §I.
  • [57] B. Weggenmann and F. Kerschbaum (2018) SynTF: synthetic and differentially private term frequency vectors for privacy-preserving text mining. In ACM SIGIR, Cited by: §I.
  • [58] J. Wiebe, T. Wilson, and C. Cardie (2005) Annotating expressions of opinions and emotions in language. LREC. Cited by: TABLE V.