Generating Optimal Privacy-Protection Mechanisms via Machine Learning

04/01/2019
by Marco Romanelli, et al., Inria
We consider the problem of obfuscating sensitive information while preserving utility. Given that an analytical solution is often not feasible, because it does not scale and because the background knowledge may be too complicated to determine, we propose an approach based on machine learning, inspired by the GANs (Generative Adversarial Networks) paradigm. The idea is to set up two nets: the generator, which tries to produce an optimal obfuscation mechanism to protect the data, and the classifier, which tries to de-obfuscate the data. By letting the two nets compete against each other, the mechanism improves its degree of protection, until an equilibrium is reached. We apply our method to the case of location privacy, and we perform experiments on synthetic data and on real data from the Gowalla dataset. We evaluate the privacy of the mechanism not only by its capacity to defeat the classifier, but also in terms of the Bayes error, which represents the strongest possible adversary. We compare the privacy-utility tradeoff of our method with that of the planar Laplace mechanism used in geo-indistinguishability, showing favorable results.


I Introduction

Big data are the lifeblood of the modern economy and, consequently, there is an enormous interest in collecting all sorts of personal information. Individuals, on the other hand, are often willing to provide their data in exchange for improved services and experiences. However, with big data comes big concern about privacy: a person who makes daily use of connected devices and social media may disclose a huge amount of detailed and accurate information, which could later on be used against him or her. It could affect everything from relationships to getting a job, or qualifying for a loan, or worse.

The rise of machine learning, with its capability of performing powerful analytics on massive amounts of data, has further exacerbated the threats to privacy. To illustrate the point, consider the following scenario: some users send their identities and coordinates to a Location Based Service (LBS) to obtain some kind of assistance, for instance the points of interest (POIs) near them. An attacker that has access to the LBS could collect the traces of these users for a while, and use machine learning techniques and some background knowledge (for instance the home address of the users) to infer information from them. For example, he could train a machine to classify traces, to associate a class with a home address and therefore with a user, and also to connect the beginning of a trace to its possible continuations. Later on, the attacker could use the machine to identify the user from a new trace (even if the trace does not contain the home address), or to predict the likely next location the user will visit.

Nonetheless, if machine learning can be a threat, it can also be a powerful means to build good privacy protection mechanisms. We envision the possibility of deploying a machine that counters the attack, and that is able to learn the best defense strategy by interacting with the adversary.

In this paper we focus on defense mechanisms based on data obfuscation by addition of controlled noise. Now, clearly more noise means more privacy, but it is important to note that, in general, privacy is not the only concern: a good defense must not only prevent the attacker from discovering sensitive information, but also disclose what is needed to get the desired quality of service. In the example above, the user may report to the LBS an obfuscated location, but still he or she expects a service in return, and the quality of service (QoS) usually degrades with the amount of obfuscation. For instance, reporting a fixed location, or a randomly chosen one, would guarantee privacy, but would result in a very low utility, because the obtained POIs would be close to the reported location, which would usually be far from the real one.

The trade-off between privacy and utility is one of the main challenges in the design of privacy mechanisms. In this paper, following the approach of [1], we aim at maximizing the privacy protection while preserving the desired QoS (other approaches to location privacy take the opposite view, and aim at maximizing utility while achieving the desired amount of privacy, see for instance [2]), i.e., the expected utility loss must remain below some tolerance threshold with respect to a chosen metric. We focus on the example of location privacy and in particular on the issue of re-identification of the user from his location, but the framework that we develop is general and can be applied to any situation in which an attacker might infer sensitive information from accessible correlated data.

The optimal trade-off between privacy and utility can in principle be achieved with linear programming. This problem can in fact be thought of as a linear optimization problem where the privacy is the objective function and the threshold on utility is the constraint. The limitation of this approach, however, is that it does not scale to large datasets: the existing tools cannot handle more than a few hundred possible locations. Furthermore, the background knowledge and the correlation between data points affect privacy, and they are usually difficult to determine and express formally.

Our position is that machine learning can be the solution to this problem. Inspired by the GANs paradigm [3], we propose a system consisting of two neural networks: a generator G and a classifier C. The idea is that G generates noise so as to confuse the adversary as much as possible, within the boundaries allowed by the utility requirement, while C takes as input the noisy locations produced by G and tries to learn how to re-identify the corresponding user. In other words, C tries to build a classification function, where the obfuscated locations are the features, and the users' ids are the labels. The classification produced by C is then fed back to G, which uses it to regulate the noise injection. While fighting against C, G learns to produce a more and more "clever" noise function, until it reaches a point where it cannot improve any longer.

A main difference between our approach and the GANs is that the GANs have access to a dataset sampled from the distribution that the generator should learn to reproduce. The adversary, which in that case is called the discriminator (D), tries to distinguish between the real data (from the dataset) and those generated by G. The net G then "learns" the target distribution from the feedback provided by D. In our case, on the contrary, there is no dataset of samples that can "direct" G towards an optimal noise distribution. In a sense, our G has to be more "creative" and "invent" a good distribution from scratch.

A major challenge in our setting is represented by the choice of the target function. To illustrate the issue, let us explain in more detail the ideas behind our approach. The goal of G is to achieve the optimal privacy-utility tradeoff. Concerning utility, we can formalize it as the expected distance between the real location and the obfuscated one (such an expected distance is known as distortion in information theory [4]). This is a typical definition in location privacy (see for instance [1, 5, 2, 6]), and captures the fact that location based services usually offer a better quality of service when they are provided with a more accurate location. Concerning privacy, a first idea would be to measure it in terms of C's misclassification, since we are trying to prevent the attacker from associating a location to the corresponding user. Following this idea, we could define the privacy loss as the expected success probability of C's classification, or, alternatively, as the similarity between C's classification and the real one (which can be formalized in terms of cross entropy).

We assume that users obfuscate their locations within the limits imposed by the utility constraint, and knowing that they will be observed by the attacker. About the attacker, we assume that he can learn the obfuscation strategy deployed by the users, and will use it to improve the precision of the re-identification. This interplay between users and attacker can be seen as an instance of a zero-sum Stackelberg game [1], and modeled as the GANs mentioned above, where the users are represented by G (the leader), the attacker by C (the follower), and the payoff function f is the privacy loss described above. From a formal point of view, finding the optimal point of equilibrium between G and C corresponds to solving a minimax problem on f (G is the minimizer and C the maximizer). The function f can be proved to be convex/concave with respect to the strategies of G (obfuscation) and C (re-identification) respectively, so from standard game theory we know that there is a saddle point and that it corresponds to the optimal obfuscation-re-identification pair.

The problem, however, is that it is difficult to reach the saddle point via the typical back-and-forth interplay between the two nets. Let us clarify this point with a simple example:

Example 1.

Consider two users, Alice and Bob, in locations a and b respectively. Assume that the first attempt of G is to report their true locations (no noise). Then C will learn that a corresponds to Alice and b to Bob. At the next round, G will figure out that its best strategy to maximize the misclassification error (given the choices of C) is to swap the locations, so that Alice reports b and Bob reports a. Then, on its turn, C will have to "unlearn" the previous classification and learn the new one. But then, at the next round, G will again swap the locations, and bring the situation back to the starting point, and so on, without ever reaching an equilibrium. Note that the strategy of G in the equilibrium point would be the one that reports a for both Alice and Bob (there are two more equilibrium points: one is when both Alice and Bob report a or b with uniform probability, the other is when they both report b; all three strategies give the same payoff and we could choose any of them to illustrate the issue). In this situation C would not be able to do better than random guessing, but C may not stop there. The problem is that it is difficult to calibrate the training of C so that it stops in proximity of the saddle point rather than continuing all the way to reach its relative optimum. The situation is illustrated in Fig. 1. The payoff function considered in this figure is the success probability of the classification, but it would be analogous if we considered, for instance, the cross entropy between the true ids and C's prediction.

[Figure 1: payoff tables; one panel reports the expected success probability of the classification, the others the information-based payoff functions discussed below.]
Fig. 1: Payoff tables of the games between G and C in Example 1, for various payoff functions f. A stands for Alice and B for Bob.

In order to address this issue, we propose to adopt a different target function, which is less sensitive to the particular labeling strategy of C. The idea is to consider not just the precision of the classification, but, rather, the information contained in it, which represents the potential precision of an ideal classifier that uses that information in the optimal way. There are two main ways of formalizing this intuition: the mutual information and the complement of the Bayes error. More precisely, let X be the random variable associated to the true ids, and W be the random variable associated to the ids resulting from the classification (predicted ids). We consider the mutual information between X and W, denoted by I(X;W), which is defined as I(X;W) = H(X) - H(X|W), where H(X) is the entropy of X and H(X|W) is the residual entropy of X given W. The Bayes error, which we will denote by B(X|W), is defined as the expected probability of error under the MAP rule, which selects the value of X having maximum a posteriori probability given the value of W. Its complement 1 - B(X|W), also known as (posterior) Bayes vulnerability [7], represents then the precision of an ideal classifier that makes the best possible guess about the true id from the classification produced by C.
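As a concrete illustration of these quantities, the following minimal NumPy sketch (not part of the original paper; the joint distribution is hypothetical) computes I(X;W) and B(X|W) from a joint distribution p(x, w) via the MAP rule, and numerically checks the Santhi-Vardy bound recalled below.

import numpy as np

# Hypothetical joint distribution p(x, w) over 2 true ids (rows) and 2 predicted ids (columns).
p_xw = np.array([[0.40, 0.10],
                 [0.15, 0.35]])

p_x = p_xw.sum(axis=1)          # marginal of X
p_w = p_xw.sum(axis=0)          # marginal of W

def H(p):                       # Shannon entropy in bits
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

H_X = H(p_x)
H_X_given_W = -np.sum(p_xw[p_xw > 0] * np.log2((p_xw / p_w)[p_xw > 0]))
I_XW = H_X - H_X_given_W                     # mutual information I(X;W)
bayes_error = 1 - np.sum(p_xw.max(axis=0))   # MAP rule: for each w, guess argmax_x p(x, w)

print(I_XW, bayes_error)
print(bayes_error <= 1 - 2 ** (-H_X_given_W))  # Santhi-Vardy bound holds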

If we set f to be I(X;W) or 1 - B(X|W), we obtain the payoff tables illustrated in Fig. 1. We note that the minimum in the first and last columns now corresponds to a point of equilibrium for any choice of C. This is not always the case, but in general the minimum is closer to the point of equilibrium, and this makes the training of G more stable, in the sense that training for a longer time does not risk increasing the distance from the equilibrium point.

In this paper we will use the mutual information to generate the noise, but we will evaluate the level of privacy of the resulting mechanism also in terms of the Bayes error. Both notions have been used in the literature as privacy measures; for instance, mutual information has been applied to quantify anonymity [8, 9]. The Bayes error and the Bayes vulnerability have been considered in [9, 10, 11, 7], and they are strictly related to the min-entropy leakage [12]. Mutual information and Bayes error are related to each other by the Santhi-Vardy bound [13]:

B(X|W) <= 1 - 2^(-H(X|W)) = 1 - 2^(-(H(X) - I(X;W)))
Another popular notion of privacy is differential privacy [14]. Its relation with mutual information has been explored in [15, 16], while its relation with the Bayes vulnerability has been investigated in [17].

I-A Contribution

The contributions of the paper are the following:

  • We propose an approach based on adversarial nets to generate obfuscation mechanisms with a good tradeoff between privacy and utility. The advantage of our method with respect to the analytical methods from the literature is twofold:

    • we can handle much larger datasets, and

    • we can handle complicated background knowledge and correlation between data points.

  • Although our approach is inspired by the GANs paradigm, it departs significantly from it. In particular, in our case, the distribution has to be “invented” rather than “imitated”. This implies that we have to come up with different techniques for evaluating a distribution. To achieve our goal, we propose a new method based on the mutual information between the supervised and the predicted class labels.

  • We apply our method to the case of location privacy. We craft some experiments to show in detail how our approach works and how it can achieve optimality. Depending on the distribution of data and on the utility constraint, the resulting noise function may achieve the maximal privacy, i.e., equivalent to that of random guessing. Trivially, in this case it also achieves the optimal tradeoff between privacy and utility.

  • We evaluate the obfuscation mechanism produced by our method on real location data from the Gowalla dataset, and compare the privacy and utility of our approach with that of the planar Laplacian, one of the most popular mechanisms used in location privacy. First we do the comparison using the adversarial classifier in our architecture, obtaining favorable results. Then, we confirm these results (and hence the advantages of our method) by means of the (ideal) optimal Bayesian classifier, which represents the strongest possible adversary.

  • We have made publicly available the code of the implementation and the experiments at the URL https://gitlab.com/MIPAN/mipan.

I-B Related work

Optimal mechanisms, namely mechanisms providing an optimal compromise between utility and privacy, have attracted the interest of many researchers. Many of the studies so far have focused on analytic methods based on linear optimization [1, 2, 18, 19]. Although they can provide exact solutions, the high complexity of linear optimization limits the scalability of these methods. Our approach, in contrast, relies on the highly efficient optimization process of neural networks (gradient descent) and does not suffer from this drawback.

The paper that is closest to ours is [20]. Its authors also propose a GAN-based method to construct mechanisms providing an optimal privacy-utility trade-off, and they consider notions of privacy and utility similar to ours. The main difference is that the target function they use in the GAN is the cross entropy rather than the mutual information (they consider mutual information to measure privacy, but in the implementation they approximate it with the log loss, that is, the expected cross entropy). Hence the convergence of their method may be problematic, at least when applied to our case study, due to the "swapping effect" described in Example 1. We have actually experimented with using cross entropy as the target function on our examples in Section IV, and the results were unsatisfactory: due to the lack of convergence, the resulting mechanisms were unstable and the level of privacy protection was poor.

One of the side contributions of our paper is a method to compute the mutual information in a neural network (cf. Section III). Recently, Belghazi et al. have proposed MINE, an efficient method for the neural estimation of mutual information [21], inspired by the framework of [22] for the estimation of a general class of functions representable as f-divergences. These methods work also in the continuous case and for high-dimensional data. In our case, however, we are dealing with a discrete domain, and we can compute the mutual information directly and exactly. Another reason for developing our own method is that we need to deal with a loss function that contains not only the mutual information, but also a component representing utility, and depending on the notion of utility the result may not be an f-divergence.

Our paradigm has been inspired by the GANs [3], but it comes with some fundamental differences:

  • C is a classifier performing re-identification, while in the GANs there is a discriminator able to distinguish a real data distribution from a generated one;

  • in the GANs paradigm the generator network tries to reproduce the original data distribution to fool the discriminator. A huge difference is that, in our adversarial scenario, G does not have a model distribution to refer to. The final data distribution only depends on the evolution of the two networks over time and it is driven by the constraints imposed in the loss functions that rule the learning process;

  • we still adopt a training algorithm which alternates epochs of training of C and epochs of training of G, but, as we will show in Section III, it is different from the one adopted for GANs.

II Our setting

Symbol          Description
C               Classifier network (attacker).
G               Generator network.
X, 𝒳            Sensitive information (random variable and domain).
Y, 𝒴            Useful information with respect to the intended notion of utility.
Z, 𝒵            Obfuscated information accessible to the service provider and to the attacker.
W, 𝒲            Information inferred by the attacker.
p(·, ·)         Joint probability of two random variables.
p(·|·)          Conditional probability.
M = P_{Z|Y}     Obfuscation mechanism.
B(·|·)          Bayes error.
L               Utility loss induced by the obfuscation mechanism.
L*              Threshold on the utility loss.
H(·)            Entropy of a random variable.
H(·|·)          Conditional entropy.
I(·;·)          Mutual information between two random variables.
TABLE I: Table of symbols

We formulate the privacy-utility optimization problem using a framework similar to that of [23]. We consider four random variables, X, Y, Z and W, ranging over the sets 𝒳, 𝒴, 𝒵 and 𝒲 respectively, with the following meaning:

  • X: the sensitive information that the users wish to conceal,

  • Y: the useful information with respect to some service provider and to the intended notion of utility,

  • Z: the information made visible to the service provider, which may be intercepted by some attacker, and

  • W: the information inferred by the attacker.

We assume a fixed joint distribution π (the data model) over the users' data (X, Y). We present our framework assuming that the variables are discrete, but all results and definitions can be transferred to the continuous case, by replacing the distributions with probability density functions, and the summations with integrals. For the initial definitions and results of this section, 𝒴 and 𝒵 may be different sets. Starting from Section III we will assume that 𝒴 = 𝒵.

An obfuscation mechanism can be represented as a conditional probability distribution M = P_{Z|Y}, where M(z|y) indicates the probability that the mechanism transforms the data point y into the noisy data point z. We assume that the values of Z are the only attributes visible to the attacker and to the service provider. The goal of the defender is to optimize the data release mechanism so as to achieve a desired level of utility while minimizing the leakage of the sensitive attribute X. The goal of the attacker is to retrieve X from Z as precisely as possible. In doing so, it produces a classification W (prediction).

Note that the four random variables form a Markov chain:

X - Y - Z - W    (1)

Their joint distribution is completely determined by the data model, the obfuscation mechanism and the classification:

p(x, y, z, w) = π(x, y) M(z|y) P_{W|Z}(w|z)

From p we can derive the marginals, the conditional probabilities of any two variables, etc. For instance:

p(x) = Σ_y π(x, y)    (2)
p(x, z) = Σ_y π(x, y) M(z|y)    (3)
p(z) = Σ_x p(x, z)    (4)
p(x|z) = p(x, z) / p(z)    (5)

The latter distribution, p(x|z), is the posterior distribution of X given Z, and will play an important role in the following sections.
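As an illustration, here is a small NumPy sketch (with a hypothetical toy data model and mechanism, not taken from the paper) that derives the joint p(x, z) and the posterior p(x|z) as in (3)-(5).

import numpy as np

# Hypothetical toy model: 2 users (X), 3 true locations (Y), 3 obfuscated locations (Z).
pi = np.array([[0.3, 0.2, 0.0],     # pi[x, y] = data model p(x, y)
               [0.0, 0.1, 0.4]])
M = np.array([[0.8, 0.1, 0.1],      # M[y, z] = obfuscation mechanism P(z | y)
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])

p_xz = pi @ M                       # joint p(x, z) = sum_y pi(x, y) M(z | y)   -- cf. Eq. (3)
p_z = p_xz.sum(axis=0)              # marginal of Z                              -- cf. Eq. (4)
p_x_given_z = p_xz / p_z            # posterior p(x | z)                         -- cf. Eq. (5)

print(p_x_given_z)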

II-A Quantifying utility

Concerning the utility, we consider a loss function l(y, z), where l(y, z) represents the utility loss caused by reporting z when the true value is y.

Definition 1 (Utility loss).

The utility loss from the original data Y to the noisy data Z, given the loss function l, is defined as the expectation of l:

L(Y, Z, l) = E[l(Y, Z)] = Σ_{y,z} p(y, z) l(y, z)    (6)

We will omit l when it is clear from the context. Note that, given a data model π, the utility loss can be expressed in terms of the mechanism M:

L(Y, Z) = Σ_{y,z} p(y) M(z|y) l(y, z)    (7)

Our goal is to build a privacy-protection mechanism that keeps the loss below a certain threshold L*. We denote by M_{L*} the set of such mechanisms, namely:

M_{L*} = { M : L(Y, Z) <= L* }    (8)
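As a sketch (assuming NumPy; the marginal of Y, the mechanism and the loss matrix are hypothetical, not values from the paper), the expected utility loss of (7) and the membership test for the set of (8) can be computed as follows.

import numpy as np

p_y = np.array([0.3, 0.3, 0.4])          # marginal of Y
M = np.array([[0.8, 0.1, 0.1],           # mechanism M[y, z] = P(z | y)
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
loss = np.array([[0.0, 1.0, 2.0],        # loss[y, z], e.g. geographical distance
                 [1.0, 0.0, 1.0],
                 [2.0, 1.0, 0.0]])

# Expected utility loss L(Y, Z) = sum_{y,z} p(y) M(z|y) loss(y, z)   -- cf. Eq. (7)
L = np.sum(p_y[:, None] * M * loss)

L_star = 0.5                             # threshold on the utility loss
print(L, L <= L_star)                    # membership test for the set of Eq. (8)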

The following property is immediate:

Proposition 1 (Convexity of M_{L*}).

The set M_{L*} is convex and closed.

II-B Quantifying privacy as mutual information

As explained in the introduction, the privacy leakage with respect to an attacker will be quantified by the mutual information I(X;W), which represents the correlation between X and the classification W produced by C, and which the obfuscation mechanism created by G tries to minimize. Let us remark once more that considering the mutual information instead of the classification precision allows us to capture the amount of information provided by W about X, and therefore the largest possible class of adversaries that can be derived from the information learned by C, not just C itself. As pointed out in the introduction, this choice makes the training of G more stable, since G is learning to defeat not only C but all the adversaries that can be derived from the information available through C. In other words, G cannot simply try to win the game by swapping around the labels of the classification learned by C: it needs to reduce the information that reaches C.

To this purpose, we consider the following information-theoretic functions:

Entropy:              H(X) = - Σ_x p(x) log p(x)    (9)
Residual entropy:     H(X|W) = - Σ_{x,w} p(x, w) log p(x|w)    (10)
Mutual information:   I(X;W) = H(X) - H(X|W)    (11)

II-C Formulation of the game and equilibrium strategies

The game that G and C play corresponds to the following minimax formulation:

min_{M ∈ M_{L*}}  max_{P_{W|Z}}  I(X;W)    (12)

where the minimization by G is on the mechanisms M ranging over M_{L*}, while the maximization by C is on the classifications P_{W|Z}.

Note that M can be seen as a stochastic matrix and therefore as an element of a vector space. An important property for our purposes is that the mutual information I(X;W) is convex with respect to M:

Proposition 2 (Convexity of I(X;W) with respect to M).

Given π and P_{W|Z}, let I(M) denote the resulting mutual information I(X;W) as a function of the mechanism M. Then I is convex. Namely, given a pair of convex coefficients λ and 1 - λ, and two mechanisms M1 and M2, we have:

I(λ M1 + (1 - λ) M2) <= λ I(M1) + (1 - λ) I(M2)    (13)

Proposition 1 and 2 show that this problem is well defined: for any choice of P_{W|Z}, I(X;W) has a global minimum in M_{L*}, and no strictly-local minima.
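The convexity stated in Proposition 2 can also be checked numerically. The following sketch (assuming NumPy, with a randomly generated data model, classifier and mechanisms, none of them taken from the paper) evaluates both sides of (13) for a random convex combination.

import numpy as np

def mutual_information(pi_xy, M, C_wz):
    # I(X;W) in bits for data model pi_xy[x, y], mechanism M[y, z], classifier C_wz[z, w].
    p_xw = pi_xy @ M @ C_wz                      # joint p(x, w)
    p_x = p_xw.sum(axis=1, keepdims=True)
    p_w = p_xw.sum(axis=0, keepdims=True)
    mask = p_xw > 0
    return np.sum(p_xw[mask] * np.log2(p_xw[mask] / (p_x @ p_w)[mask]))

rng = np.random.default_rng(0)
pi_xy = rng.dirichlet(np.ones(6)).reshape(2, 3)   # random data model over 2 ids, 3 locations
C_wz = rng.dirichlet(np.ones(2), size=3)          # random fixed classifier P(w | z)
M1 = rng.dirichlet(np.ones(3), size=3)            # two random mechanisms P(z | y)
M2 = rng.dirichlet(np.ones(3), size=3)

lam = 0.3
lhs = mutual_information(pi_xy, lam * M1 + (1 - lam) * M2, C_wz)
rhs = lam * mutual_information(pi_xy, M1, C_wz) + (1 - lam) * mutual_information(pi_xy, M2, C_wz)
print(lhs <= rhs + 1e-12)   # convexity of I(X;W) in the mechanism, cf. Eq. (13)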

We note that, in principle, one could avoid using the GAN paradigm, and try to achieve the optimal mechanism by solving, instead, the following minimization problem:

min_{M ∈ M_{L*}}  I(X;Z)    (14)

where the min is meant, as before, as a minimization over the mechanisms M ranging over M_{L*}. This approach would have the advantage that it is independent from the attacker, so one would need to reason only about I(X;Z) (and there would be no need for a GAN).

The main difference between I(X;W) and I(X;Z) is that the latter represents the information about X available to any adversary, not only to those that try to retrieve X by building a classifier. This fact is reflected in the following relation between the two formulations:

Proposition 3.

I(X;W) <= I(X;Z)

Note that, since I(X;Z) is an upper bound of our target, minimizing it also imposes a limit on I(X;W).

On the other hand, there are some advantages in considering I(X;W) instead of I(X;Z): first of all, Z may have a much larger and more complicated domain than W, so performing the gradient descent on I(X;Z) could be unfeasible. Second, if we are interested in considering only classification-based attacks, then I(X;W) should give a better result than I(X;Z). In this paper we focus on the former, and leave the exploration of an approach based on I(X;Z) as future work.
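The relation of Proposition 3 is an instance of the data processing inequality; the sketch below (assuming NumPy, with a hypothetical joint of (X, Z) and a random classifier, not values from the paper) illustrates it numerically.

import numpy as np

def mi(p_ab):
    # Mutual information (bits) from a joint distribution matrix p_ab.
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    mask = p_ab > 0
    return np.sum(p_ab[mask] * np.log2(p_ab[mask] / (p_a @ p_b)[mask]))

rng = np.random.default_rng(1)
p_xz = rng.dirichlet(np.ones(12)).reshape(3, 4)   # hypothetical joint of (X, Z)
C_wz = rng.dirichlet(np.ones(3), size=4)          # classifier P(w | z), rows indexed by z

p_xw = p_xz @ C_wz                                # joint of (X, W)
print(mi(p_xw) <= mi(p_xz) + 1e-12)               # data processing: I(X;W) <= I(X;Z)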

II-D Measuring privacy as Bayes error

As explained in the introduction, we intend to evaluate the resulting mechanism also in terms of Bayes error. Here we give the relevant definitions and properties.

Definition 2 (Bayes error).

The Bayes error of X given W is:

B(X|W) = Σ_w p(w) (1 - max_x p(x|w)) = 1 - Σ_w max_x p(x, w)

Namely, the Bayes error is the expected probability of “guessing the wrong id” for an adversary that, when he sees that C produces the id w, guesses the id x that has the highest posterior probability given w.

The definition of B(X|Z) is analogous. Given a mechanism M, we can regard B(X|W) as a measure of the privacy of M w.r.t. one-try [12] classification-based attacks, whereas B(X|Z) measures privacy w.r.t. any one-try attack. The following proposition shows the relation between the two notions.

Proposition 4.

B(X|Z) <= B(X|W)

Next, we propose an implementation via GANs of the method illustrated above.

III Implementation in Neural Networks

In this section we illustrate the network architecture and the training algorithm that we propose to implement the interplay between G and C. The scheme of our adversarial game is illustrated in Fig. 2, where the meaning of the symbols is as follows:

Fig. 2: Scheme of the adversarial nets for our setting.
  • x, y and z are instances of the random variables X, Y and Z respectively, whose meaning is described in the previous section. In this section, and in the rest of the paper, we assume that the domains of Y and Z coincide, namely 𝒴 = 𝒵.

  • s (seed) is a randomly-generated number.

  • G(y, s) is the function learnt by the network G, and it represents an obfuscation mechanism P_{Z|Y}. The input s provides the randomness needed to generate random noise. It is necessary because a neural network is in itself deterministic.

  • C(z) is the classification learnt by the network C, and it corresponds to P_{W|Z}.

Data: x, y   // Training data
Models: G_i: generator evolution at the i-th step;
C_i: classifier evolution at the i-th step.
train(N, d): trains the network N on the data d.
generator(y): outputs a noisy version z of y.
classifier = load model C_0
generator = load model G_0
i = 0
while True do
        // Let P^(i)_{Z|Y} be the mechanism produced by the current generator.

       i += 1
       classifier = train(classifier, generator(y))
       C_i = classifier
        // Train the classifier to produce a P_{W|Z} that approximates the best classification of the current noisy data.

       adv_network = generator + classifier in cascade
       adv_network = train(adv_network, (x, y))
        // Train the network to produce a new P^(i)_{Z|Y} by reducing I(X;W). The weights of the classifier are frozen during this phase.

       generator = generator layers in adv_network; G_i = generator
        // Save the generator evolution for the next iteration.

       classifier = load model C_0
        // Reset the classifier.

end while
Algorithm 1 Adversarial algorithm with classifier reset.


The evolution of the adversarial network is described in Algorithm 1. G and C are trained at two different moments within the same adversarial training iteration. In particular, C_i is obtained by training the network against the noise generated by G_{i-1}, and G_i is obtained by fighting against C_i.

Note that in our method each C_i is trained only on the output from G_{i-1}. This is a main difference with respect to the GANs paradigm, where the discriminator is always trained both over the output of the generator and the target distribution. Another particularity of our method is that, at the end of the i-th iteration, while G_i is retained for the next iteration, C_i is discarded and the classifier is reinitialized to the base one C_0. The reason is that we have found out experimentally that, if C is not reset, it might take a long time before it becomes able to contrast the noise injection at the (i+1)-th iteration. In fact, C_i has been trained on data generated by G_{i-1} and has consistently updated its weights according to that. It could take a long time for C to “forget” what it has learned and move on to learning to beat the new noise distribution. On the contrary, when the classifier is reset to C_0, it quickly adapts to the data produced by the new generator.
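To make the alternation concrete, the following is a minimal Keras sketch of Algorithm 1, not the authors' implementation: build_generator, build_classifier and generator_loss are hypothetical placeholders (the generator maps a location y and a seed s to an obfuscated location z, the classifier maps z to a softmax over user ids, x are one-hot user ids, and generator_loss stands for the combined loss of Section III-B). In a full implementation the utility term, which depends on y and z, would be attached via add_loss or a custom training step.

import numpy as np
import tensorflow as tf

def adversarial_training(build_generator, build_classifier, generator_loss,
                         x, y, iterations=100, epochs_c=10, epochs_g=10):
    generator = build_generator()                                  # G_0
    base_weights = build_classifier().get_weights()                # weights of C_0

    for i in range(iterations):
        # Train the classifier (reset to C_0) on the output of the current generator.
        classifier = build_classifier()
        classifier.set_weights(base_weights)
        classifier.compile(optimizer="adam", loss="categorical_crossentropy")
        s = np.random.uniform(size=(len(y), 1))
        z = generator.predict([y, s], verbose=0)
        classifier.fit(z, x, epochs=epochs_c, verbose=0)           # this is C_i

        # Cascade G and C, freeze C, and train the cascade so that only G's weights move.
        classifier.trainable = False
        inp_y = tf.keras.Input(shape=y.shape[1:])
        inp_s = tf.keras.Input(shape=(1,))
        w = classifier(generator([inp_y, inp_s]))
        adv_network = tf.keras.Model([inp_y, inp_s], w)
        adv_network.compile(optimizer="adam", loss=generator_loss)
        adv_network.fit([y, np.random.uniform(size=(len(y), 1))], x,
                        epochs=epochs_g, verbose=0)                # updates G_i in place

    return generator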

We describe now in more detail some key implementation choices of our proposal.

III-A Implementation details: Base models

The base model C_0 is simply the “blank” classifier that has not learnt anything yet. As for G_0, we have found out experimentally that it is convenient to start with a noise function that is fairly spread out. This is because in this way the generator has more data points with non-null probability to consider, and can figure out faster which way to go to minimize the mutual information.

III-B Implementation details: Utility

The utility constraint is incorporated in the loss function of G in the following way:

loss_G = α · L_u + β · I(X;W)    (15)

where α and β are parameters that allow us to tune the trade-off between utility and privacy. The purpose of L_u is to ensure that the constraint on utility is respected, i.e., that the obfuscation mechanism that G is trying to produce stays within the domain M_{L*}. We recall that L* represents the constraint (cf. (8)). Since we need to compute the gradient of the loss, we need a differentiable function for L_u. We propose to implement it using softplus, which is a function of two arguments in ℝ defined as:

softplus(a, b) = ln(1 + e^(a - b))    (16)

This function is non-negative, monotonically increasing, and its value is close to 0 for a < b, while it grows quickly for a > b. Hence, we define

L_u = softplus(L(Y, Z), L*)    (17)

With this definition, L_u does not interfere with the mutual-information term when the constraint is respected, and it forces G to stay within the constraint because its growth when the constraint is not respected is very steep.
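A minimal TensorFlow sketch of how such a softplus-based penalty could be combined with the mutual-information term is shown below; the constants alpha, beta and L_star and the Euclidean distortion are illustrative assumptions, not values from the paper.

import tensorflow as tf

def expected_utility_loss(y_true_loc, z_obf):
    # Batch estimate of L(Y, Z): mean Euclidean distance between true and obfuscated locations.
    return tf.reduce_mean(tf.norm(y_true_loc - z_obf, axis=-1))

def utility_penalty(y_true_loc, z_obf, L_star):
    # L_u, cf. Eqs. (16)-(17): close to 0 while the expected loss is below the threshold
    # L_star, and growing once the constraint is violated.
    return tf.math.softplus(expected_utility_loss(y_true_loc, z_obf) - L_star)

def generator_loss(y_true_loc, z_obf, mi_term, alpha=1.0, beta=1.0, L_star=0.5):
    # Combined generator loss of Eq. (15): alpha * L_u + beta * I(X;W).
    # mi_term is the mutual-information estimate between true and predicted ids
    # (see the sketch in Section III-C below).
    return alpha * utility_penalty(y_true_loc, z_obf, L_star) + beta * mi_term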

III-C Implementation details: Mutual Information

One of the characteristics of our work is that the loss function is based on the mutual information between the supervised and the predicted class labels. We show how it is computed via an example. Assume we are at the i-th iteration. Let us consider the scenario illustrated in Table II, which describes a classification problem over three classes A, B, and C, and six samples. The X and Y columns represent the true values of the data, and the Z column represents the obfuscated version of Y, generated by G_{i-1}. The W columns represent the labels predicted by C_i on the Z values of the samples. These are the result of the training of the classifier on the output of G_{i-1}, the generator at the previous iteration. The numbers in the W columns represent the confidence of the prediction (as produced by the softmax activation functions), and can be interpreted as probabilities.

In order to compute the mutual information I(X;W) we need to calculate H(X) and H(X|W), using their defining formulae (9) and (10), and then apply (11). To this purpose, we just need to find out the joint distribution p(x, w), since the marginals p(x) and p(w), and the conditional probability p(x|w), can then be obtained from p(x, w) in the standard way. Now, the value of p(x, w) can be computed as the average over the samples of the predicted confidence of w, restricted to the samples whose true class is x; in the example of Table II this yields one value of p(x, w) for each pair of class labels.

[Table II lists, for each of the six samples (two per class A, B, C), the true class X, the true data Y, the obfuscated data Z and the softmax confidences for the predicted label W.]
TABLE II: Classification problem with three classes A, B, C.

In order to make the function work with the back propagation algorithm, we implemented all the steps using TensorFlow and Keras native functions: summation, multiplication, logarithm, condition, while loop.
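A possible TensorFlow implementation of this computation, assuming one-hot true labels and softmax confidences as inputs, is sketched below; it is consistent with the description above but is not the authors' exact code. The returned value can be used directly as the mutual-information term of the generator loss sketched in Section III-B.

import tensorflow as tf

def mutual_information(x_onehot, w_softmax, eps=1e-12):
    # Estimate of I(X;W) (in nats) from a batch: x_onehot has shape (n, |X|) with one-hot
    # true labels, w_softmax has shape (n, |W|) with the classifier's softmax confidences.
    x_onehot = tf.cast(x_onehot, tf.float32)
    n = tf.cast(tf.shape(x_onehot)[0], tf.float32)
    # Empirical joint p(x, w): average over the samples of the outer product between
    # the one-hot true label and the predicted distribution.
    p_xw = tf.matmul(x_onehot, w_softmax, transpose_a=True) / n
    p_x = tf.reduce_sum(p_xw, axis=1, keepdims=True)
    p_w = tf.reduce_sum(p_xw, axis=0, keepdims=True)
    # I(X;W) = sum_{x,w} p(x,w) log( p(x,w) / (p(x) p(w)) )
    ratio = p_xw / (p_x * p_w + eps)
    return tf.reduce_sum(p_xw * tf.math.log(ratio + eps))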

III-D Implementation details: Metrics to evaluate the classification outcome

To evaluate the quality of the classification produced by the C network, we rely on two metrics: accuracy and F1_score.

To explain their meaning, let us consider a three-class problem and the corresponding confusion matrix in Table III, where the main diagonal represents the number of matches between the true labels and the predicted ones, while all the other cells represent the mismatches.

                        Predicted labels
                   Class 0   Class 1   Class 2
True     Class 0    c_00      c_01      c_02
labels   Class 1    c_10      c_11      c_12
         Class 2    c_20      c_21      c_22
TABLE III: Confusion matrix for a 3-class classification problem.

The accuracy (for simplicity we give the definition for the case in which the elements of the dataset are equally distributed among all the classes; if this is not the case, the definition of accuracy is more complicated) is:

accuracy = (c_00 + c_11 + c_22) / Σ_{i,j} c_ij    (18)

The F1_score involves the notions of precision and recall. These are defined on each class i:

precision_i = c_ii / Σ_j c_ji    (19)
recall_i = c_ii / Σ_j c_ij    (20)
F1_score_i = 2 (precision_i · recall_i) / (precision_i + recall_i)    (21)

The global F1_score is defined as the weighted average of the F1_score_i's for each class. In the simplest case, when the classes in the dataset are balanced, we can define it as the simple average. In the above example it would be:

F1_score = (F1_score_0 + F1_score_1 + F1_score_2) / 3    (22)

In case the classes are unbalanced, more elaborate definitions should be applied. Typically, we could assign to each class a weight inversely proportional to the frequency of the class.

Both the accuracy and the F1_score take values in [0, 1]. It is important to consider both of them to avoid issues like the accuracy paradox: a classifier always predicting the most frequent class could be accurate, but it could totally neglect those classes which are not very frequent. Furthermore, even in the case of balanced datasets, considering the F1_score, and therefore the precision and recall, may be useful in case of low accuracy, to understand whether all the classes are misclassified or some are wrongly predicted more often than others.
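For reference, a small NumPy sketch (with a hypothetical confusion matrix, not data from the paper) computing the metrics of (18)-(22):

import numpy as np

def metrics_from_confusion(cm):
    # Accuracy and per-class precision/recall/F1 from a confusion matrix
    # with true labels on the rows and predicted labels on the columns.
    cm = np.asarray(cm, dtype=float)
    accuracy = np.trace(cm) / cm.sum()                    # Eq. (18)
    precision = np.diag(cm) / cm.sum(axis=0)              # Eq. (19)
    recall = np.diag(cm) / cm.sum(axis=1)                 # Eq. (20)
    f1 = 2 * precision * recall / (precision + recall)    # Eq. (21)
    return accuracy, precision, recall, f1.mean()         # Eq. (22): simple average

acc, prec, rec, f1 = metrics_from_confusion([[50, 3, 2],
                                             [4, 45, 6],
                                             [1, 5, 49]])
print(acc, f1)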

III-E Implementation details: number of epochs and batch size

The convergence of the game can be quite sensitive to the number of epochs and the batch size. We refer to the literature [24] for a general discussion about the impact they have on learning. It is important to note that:

  • Choosing a batch size that is too small for training might make the constraint on the utility effectively too strict. In fact, since the utility loss is an expectation, a larger number of samples makes it more likely that some points are pushed further than the threshold, taking advantage of the fact that their loss may be compensated by other data points for which the loss is small.

  • Training the classifier for too few epochs might result in a too weak adversary. On the other hand, if it is trained for a long time, we should make sure that the classification performance does not drop over the validation and test sets, because that might indicate an overfitting problem.

III-F On the convergence of our method

In principle, at each iteration i, our method relies on the ability of the G network to improve the obfuscation mechanism starting from the one produced by G_{i-1}, given only the original locations and the model C_i, which are used to determine the direction of the gradient for G. The classifier C_i is a particular adversary, modeled by its weights and its biases. However, thanks to the fact that the main component of the loss is I(X;W) and not the cross entropy, G takes into account all the attacks that would be possible from C_i's information. We have experimentally verified that, indeed, using the mutual information rather than the cross entropy determines a substantial improvement of the convergence process, and the resulting mechanisms provide better privacy (for the same utility level). Again, the reason is that the cross entropy would be subject to the “swapping effect” illustrated by Example 1 in the introduction.

Another improvement of the convergence is due to the fact that we reset the classifier to the initial weight setting (C_0) at each iteration, instead of letting C_{i+1} evolve from C_i. We have experimentally verified that the precision of C improves more rapidly if we start from an “agnostic” situation rather than from the knowledge accumulated in previous steps. This is, intuitively, due to the fact that the noise produced by G may change substantially from one iteration to the next, hence C would have to “unlearn” the now obsolete information of C_i, if the latter were its starting point.

The loss function that the G network has to minimize, namely (15), is a combination of a softplus applied to the utility loss (which is convex) and the mutual information, which is convex in M, i.e., our obfuscation mechanism. This means that there are only global minima, although there can be many of them, all equivalent. Therefore, for sufficiently small updates, the noise distribution modeled by G converges to one of these optima, provided that the involved network has enough capacity to carry out the gradient descent involved in the training algorithm. In practice, however, the network represents a limited family of noise distributions, and instead of optimizing the noise distribution itself we optimize the weights of this network, which introduces multiple critical points in the parameter space.

IV Experiments on synthetic data

In this section we illustrate some experiments on artificial data to show how the proposed method works. We consider the issue of location privacy, namely, we assume a set of users who want to protect their identities, but need to disclose (an obfuscated version of) their locations for some utility purpose. Following Definition 1 (Eq. (6)), the utility loss is the expected distortion introduced by the obfuscation mechanism, i.e., the expected distance between the true location and the obfuscated one. To keep things simple, we consider just four users. In order to be consistent with the experiments on real data, we assume the same set of locations as in the next section, namely locations in an area of Paris.

More precisely, with reference to the setting in Section II, we instantiate the domains of the random variables X, Y and Z, and the loss function, as follows:

  • 𝒳 = the identities of the four users.

  • 𝒴 = 𝒵 = the locations in a square region centered at 5, Boulevard de Sébastopol, Paris. Each location entry is defined by a pair of normalized coordinates.

  • l(y, z) = the geographical distance between y and z (for simplicity, in this paper we use the Euclidean distance rather than the Haversine distance; for such a limited area they are quite close anyway).

The goal is to produce an obfuscation mechanism that achieves a good trade-off between utility and privacy. Throughout the various experiments, we will compare our method with the planar Laplace mechanism associated to geo-indistinguishability [5], which is rather popular in the domain of location privacy.

IV-A The planar Laplace mechanism

We recall that the planar Laplace probability density function in the point z, given that the true location is y, is defined as [5]:

D_ε(z|y) = (ε² / 2π) e^(-ε d(y, z))    (23)

where d(y, z) is the Euclidean distance between y and z.

In order to compare the above mechanism with ours, we need to tune the privacy parameter ε so that the planar Laplace has the same expected distortion as the upper bound L* on the utility loss applied in our method, and then compare the privacy degree of the two corresponding mechanisms. To this purpose, we recall that the expected distortion introduced by the planar Laplace depends only on ε (not on the prior π), and it is given by:

E[d(Y, Z)] = 2 / ε    (24)
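A simple way to check (24) empirically is to sample the planar Laplace by drawing a uniform angle and a Gamma(2, 1/ε)-distributed radius (the radial marginal of (23)); the following NumPy sketch, with an illustrative ε that is not a value from the paper, compares the empirical mean distance with 2/ε.

import numpy as np

def planar_laplace_sample(y, epsilon, rng):
    # One obfuscated location drawn from the planar Laplace centered at y (cf. Eq. (23)):
    # uniform angle, Gamma(2, 1/epsilon) radius.
    theta = rng.uniform(0.0, 2 * np.pi)
    r = rng.gamma(shape=2.0, scale=1.0 / epsilon)
    return y + r * np.array([np.cos(theta), np.sin(theta)])

rng = np.random.default_rng(0)
epsilon = 0.01                          # hypothetical privacy parameter (1/meters)
y = np.array([0.0, 0.0])
samples = np.array([planar_laplace_sample(y, epsilon, rng) for _ in range(20000)])
distances = np.linalg.norm(samples - y, axis=1)
print(distances.mean(), 2 / epsilon)    # empirical mean distance vs. the 2/epsilon of Eq. (24)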

IV-B Bayes error estimation

As explained in Section II, besides I(X;W) we will also use the Bayes error B(X|Z) to evaluate the level of protection offered by a mechanism.

To this purpose, we discretize 𝒵 by creating a grid over the region, thus determining a partition of the region into a number of disjoint cells. We will create different grid settings to see how the partition affects the Bayes error. In particular, we will consider four different lengths for the side of a cell, corresponding to four different numbers of cells.

We will also run various experiments with different numbers of obfuscated locations (hits) created for each original one.

Each hit falls in exactly one cell. Hence, denoting by n the total number of hits, by n_c the number of hits falling in cell c, and by n_{c,x} the number of hits in cell c belonging to class x, we can estimate the probability that a hit is in cell c as:

p(c) = n_c / n    (25)

and the probability that a hit in cell c belongs to class x as:

p(x|c) = n_{c,x} / n_c    (26)

We can now estimate the Bayes error as follows:

B(X|Z) ≈ Σ_{c=1}^{k} p(c) (1 - max_x p(x|c))    (27)

where k is the total number of cells.
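A direct implementation of this estimation (a sketch assuming NumPy; the grid origin, cell side and the synthetic hits are illustrative assumptions) could look as follows.

import numpy as np

def estimate_bayes_error(hits, labels, cell_side, region_origin=(0.0, 0.0)):
    # Estimate B(X|Z) from obfuscated hits (n x 2 array) and their true class labels,
    # using a square grid of the given cell side (cf. Eqs. (25)-(27)).
    cells = np.floor((hits - np.asarray(region_origin)) / cell_side).astype(int)
    error = 0.0
    n = len(hits)
    # Group the hits by cell; within each cell the MAP guess is the most frequent class.
    for cell in np.unique(cells, axis=0):
        in_cell = np.all(cells == cell, axis=1)
        counts = np.bincount(labels[in_cell])
        error += (in_cell.sum() - counts.max()) / n   # p(c) * (1 - max_x p(x|c))
    return error

# Hypothetical usage: 4 users, random hits.
rng = np.random.default_rng(0)
labels = rng.integers(0, 4, size=1000)
hits = rng.normal(loc=labels[:, None] * 0.3, scale=0.1, size=(1000, 2))
print(estimate_bayes_error(hits, labels, cell_side=0.2))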

It is important to stress that the results of these computations are influenced by the chosen grid. In particular, we have two extreme cases:

  • when the grid consists of only one cell, the estimated Bayes error is maximal (equal to the error of guessing the a priori most likely class) for any obfuscation mechanism;

  • when the number of cells is large enough so that each cell contains at most one hit, the estimated Bayes error is 0 for any obfuscation mechanism.

In general, a coarser granularity is a source of additional Bayes error, independently of the obfuscation mechanism. A finer granularity guarantees higher discrimination power, especially when we compare methods which scatter the obfuscated locations in different regions of the domain.

IV-C The synthetic dataset

The synthetic dataset consists of the same number of locations (true locations) for each of the four users (classes), hence we are in a situation of balanced classes. The locations of each user are placed around one of the vertices of a square centered at 5, Boulevard de Sébastopol, Paris (each user corresponds to a different vertex). They are randomly generated so as to form a cloud of entries around each vertex, in such a way that no location falls further than a bounded distance from the corresponding vertex. Note that these distributions determine the random variables X and Y, and their correlation.

For each user, part of the locations is used for training and validation purposes, whilst the rest is used for testing. These sets are represented in Fig. 13(b) and Fig. 3(b), respectively.

IV-D Experiment 1: Synthetic data, relaxed utility constraint

As a first experiment, we choose for the upper bound L* on the expected distortion a value high enough so that in principle we can achieve the highest possible privacy, which is obtained when the obfuscated location observed by the attacker gives no information about the true location, which means that I(X;Z) = 0. When this is the case, the best the attacker can do is random guessing. Since we have four users with balanced classes, the corresponding Bayes risk is 0.75.

One way to achieve the maximum privacy is to map all the locations into the same obfuscated location in the middle. To compute a sufficient L*, consider that the vertices where the locations of the users are placed form a square, hence each vertex is at a fixed distance from the center. Taking into account that the locations can lie somewhat further away from the corresponding vertex, we conclude that any value of L* larger than the sum of these two distances should be enough to allow us to obtain the maximum privacy. Just to be sure, we set the upper bound on the distortion to a larger value:

(28)

but we will see from the experiments that a much smaller value of L* would have been sufficient.

We now need to tune the planar Laplace so that the expected distortion is at least L*. We decide to set:

(29)

which, using Equation (24), gives us a value of ε:

(30)

We have actually used this instance of the planar Laplace also as a starting point of our method: we have defined G_0 to be the planar Laplace mechanism with the value of ε given in (30). For the following steps, the networks G and C are constructed as explained in Algorithm 1. In particular, at each iteration we train the generator for a fixed number of epochs with a fixed batch size and learning rate, and for this particular experiment we set the weight α for the utility loss and the weight β for the mutual information accordingly; the classifier is likewise trained at each iteration with its own batch size, number of epochs and learning rate.

Fig. 3: Synthetic testing data. From left to right: (a) Laplacian noise, (b) no noise, (c) our noise.
Fig. 4: Estimation of B(X|Z) on the original version of the synthetic data, for various numbers of cells: (a) training data, (b) testing data.
Fig. 5: Estimation of B(X|Z) on the synthetic data for the Laplacian and for our mechanism, for various numbers of cells: (a) training data, (b) testing data. The empirical utility loss is reported for both the Laplacian and our mechanism.

After a certain number of iterations, the accuracy of the C network tends to 0.25, both over the training and the validation set. This means that C just randomly predicts one of the four classes. This happens because the noise injection is effective, and C cannot learn anything from the training set. Since this corresponds to the maximum possible Bayes risk (0.75), we then know that we can stop.

The result of the application of the planar Laplace to the testing set is illustrated in Fig. 3(a). In the appendix we report also the result on the training set (Fig. 13(a)). The empirical distortion (i.e., the distortion computed on the sampled obfuscated locations) on the training and testing data is in line with the theoretical distortion determined by the value of ε in (30).

The results of the application of our method, i.e., the final generator G, to the testing and training sets are reported in Fig. 3(c) and 13(c), respectively. The empirical distortion for the training and testing sets is well below the limit set in (28). This is due to the fact that to achieve the optimum privacy we do not need to push the locations that far: the relevant distance is that of the vertices from the center, and even though some locations are further away from their vertex, there are also locations that are closer, and that compensate the utility loss (which is a linear average measure).

From Fig. 3 and 13 we can see that, while the Laplace tends to “spread out” the obfuscated locations, our method tends to concentrate them into a small area. This may not be possible in general, as it depends on the utility constraint. Nevertheless, we can expect that our mechanism will tend to overlap the obfuscated locations of different classes as much as allowed by the utility constraint. With the Laplace, on the contrary, the areas of the various classes remain pretty separated. This is reflected by the Bayes error estimation reported in Fig. 5.

We note that the Bayes error of the planar Laplace tends to decrease as the grid becomes finer. We believe that this is due to the fact that, with a coarse grid, there is an effect of confusion simply due to the large size of each cell. The behavior of our noise, on the contrary, is quite stable. It is also interesting to note that, when the grid is very coarse, the Bayes error is already maximal on the original data (cf. Fig. 4), which must be due to the fact that all the vertices are in the same cell. While the Bayes error remains maximal also with our obfuscation mechanism, with the Laplacian it decreases. The reason is that the noise scatters the locations in different cells, and they become, therefore, distinguishable.

Fig. 6: Synthetic testing data. From left to right: (a) Laplacian noise, (b) no noise, (c) our noise.