Differentially Private Obfuscation Mechanisms for Hiding Probability Distributions

12/03/2018 · by Yusuke Kawamoto, et al.

We propose a formal model for the privacy of user attributes in terms of differential privacy. In particular, we introduce a notion, called distribution privacy, as the differential privacy for probability distributions. Roughly, a local obfuscation mechanism with distribution privacy perturbs each single input so that the attacker cannot significantly gain any information on the probability distribution of inputs by observing an output of the mechanism. Then we show that existing local obfuscation mechanisms have a limited effect on distribution privacy. For instance, we prove that, to provide distribution privacy w.r.t. the approximate max-divergence (resp. f-divergence), the amount of noise added by the Laplace mechanism should be proportional to the infinite Wasserstein (resp. the Earth mover's) distance between the two distributions we want to make indistinguishable. To provide a stronger level of distribution privacy, we introduce an obfuscation mechanism, called the tupling mechanism, that perturbs a given input and adds random dummy data. Then we apply the tupling mechanism to the protection of user attributes in location based services, and demonstrate by experiments that the tupling mechanism outperforms the popular local (extended) differentially private mechanisms in terms of distribution privacy and utility. Finally, we discuss the relationships among utility, privacy, and the cost of adding dummy data.


I Introduction

Differential privacy [1, 2] is a quantitative notion of privacy that has been applied to a wide range of areas, including databases, geo-location, social networks, and machine learning. The protection of differential privacy can be achieved by adding controlled noise to the data that we wish to hide. In particular, previous studies have focused on local obfuscation mechanisms, namely, randomized algorithms that perturb each single “point” datum (e.g., a geo-location point) by adding certain probabilistic noise before sending it out to a data collector. However, the obfuscation of a probability distribution of points (e.g., a distribution of locations of male/female users) still remains to be investigated in terms of differential privacy.

For example, a location-based service (LBS) collects each user’s geo-location data to provide a service (e.g., navigation, resource-tracking, or recommendation), and has been widely studied in terms of the privacy of user location information. As shown in previous work [3, 4], users can hide their accurate locations by sending to the LBS provider only approximate location information calculated by an obfuscation mechanism.

Nevertheless, a user’s location information can be used by an attacker to infer the user’s attributes (e.g., age, gender, and residence area) or activities (e.g., working, sleeping, and shopping) [5, 6, 7]. For example, if an attacker knows the distribution of locations of the male/female users, he may be able to detect whether given users are male or female after observing their locations. However, the protection of such attributes in terms of differential privacy has not been addressed in the literature. In fact, to the best of our knowledge, no local mechanisms have been studied for their capability of obfuscating a probability distribution (e.g., a distribution of locations of male/female users) in a differentially private manner.

To illustrate the privacy of user attributes in an LBS, let us consider a running example in which users try to prevent an attacker from inferring whether they are male or female. Let λmale and λfemale be the probability distributions of locations of the male and female users, respectively. Then the privacy of this attribute can be modeled as a property that the attacker has no idea on whether the actual location follows the distribution λmale or λfemale after observing an obfuscated location.

This can be formalized in terms of ε-local differential privacy as follows. For each t ∈ {male, female}, we denote by p(y | λt) the probability of observing an obfuscated location y when an actual location is distributed over λt. Then we define the privacy of t by:

e^{−ε} ≤ p(y | λmale) / p(y | λfemale) ≤ e^{ε},

which represents that the attacker cannot distinguish whether the users follow λmale or λfemale (with degree of ε).

To generalize this, we introduce a notion, called distribution privacy (DistP), that is, the differential privacy for probability distributions. Roughly, a local obfuscation mechanism with distribution privacy perturbs each single input so that the attacker cannot significantly gain any information on the probability distribution of inputs by observing an output of the obfuscation mechanism. More specifically, we say that a mechanism A provides distribution privacy w.r.t. λmale and λfemale if, by observing an obfuscated location (A’s output), no attacker can detect whether the actual location (A’s input) is sampled from λmale or λfemale. (In our setting, the attacker observes only a sampled output of A, and not the exact histogram of A’s output distribution. Hence our approach deals only with local obfuscation by each user, and not with the sanitization of the whole histogram by the central LBS provider.) In this way, the privacy of attributes can be formalized using distribution privacy.

To achieve a stronger privacy of attributes, we introduce a mechanism, which we call the tupling mechanism, that not only perturbs an actual input, but also adds random dummy data to the output. Then we prove that the tupling mechanism provides distribution privacy. Moreover, we demonstrate by experiments that the tupling mechanism outperforms the popular mechanisms (the randomized response [8], the planar Laplace mechanism [3], and the planar Gaussian mechanism) in terms of distribution privacy and service quality, hence it is useful to protect the privacy of attributes.


Our contributions.   The main contributions of this work are given as follows:

  • We present a formal model for the privacy of user attributes in terms of differential privacy. In particular, we introduce the notion of distribution privacy (DistP) to model the difficulty of obtaining information on probability distributions (representing user attributes) from an output observed by an attacker.

  • We investigate the theoretical foundation of distribution privacy. More specifically, we present useful properties of distribution privacy, including compositionality, and some relationships among variants of distribution privacy. We also give an interpretation of distribution privacy in terms of the Bayes factor.

  • We investigate existing obfuscation mechanisms that can provide distribution privacy. In particular, we show that (extended) differential privacy mechanisms need to add much noise to provide distribution privacy.

  • We prove that every (ε, δ)-differential privacy mechanism provides (ε′, δ′)-distribution privacy for some ε′ and δ′.

  • We show that an obfuscation mechanism with δ > 0 (e.g., the Gaussian mechanism) is not useful for distribution obfuscation when the domain is large.

  • We prove that every extended differential privacy mechanism (e.g., the Laplace mechanism) provides extended distribution privacy (XDistP) whose level is proportional to the ∞-Wasserstein distance between the two distributions that we want to make indistinguishable.

  • We show that every f-divergence privacy (e.g., KL-privacy) mechanism contributes to distribution obfuscation proportionally to the Earth mover’s distance (the 1-Wasserstein distance).

  • We present an obfuscation mechanism, called the tupling mechanism, that adds random dummies to the output. Then we show how these dummies obfuscate the original input distribution (resp. input value) in terms of distribution privacy (resp. differential privacy).

  • We apply the tupling mechanism to the protection of user attributes in location based services (LBSs), and evaluate it by experiments in the framework of distribution privacy. More precisely, we demonstrate by experiments how the tupling mechanism can obfuscate the distribution of actual user locations and make it more difficult to infer their attributes from their obfuscated location. Furthermore, we show that probabilistic dummy data improve each user’s expected service quality loss (SQL) in the LBS. We also discuss the relationships among utility, privacy, and the cost of adding dummy data.


Paper organization.   The rest of this paper is organized as follows. Section II introduces some background on privacy and metrics used in this paper. Section III introduces a formal model for the privacy of user attributes, and proposes the notion of distribution privacy and its extensions. Section IV generalizes the distribution privacy notion w.r.t. an arbitrary divergence, and presents its useful properties, including compositionality and relationships among privacy notions. Section V shows how existing mechanisms contribute to the obfuscation of probability distributions. Section VI proposes the tupling mechanism, and shows that this mechanism provides distribution privacy, hence differential privacy. Section VII applies the tupling mechanism to the protection of attribute privacy in LBSs, and evaluates it in terms of distribution privacy and service quality loss by experiments. Section VIII presents related work and Section IX concludes. All proofs of technical results can be found in the Appendix.

II Preliminaries

In this section we recall some notions of privacy and metrics used in this paper, including differential privacy, its variant notions, probability coupling, and the Wasserstein metrics.

Let ℕ⁺ (resp. ℕ) be the set of positive (resp. non-negative) integers, and ℝ⁺ (resp. ℝ≥0) be the set of positive (resp. non-negative) real numbers. Let [0, 1] be the set of non-negative real numbers not greater than 1. Throughout the paper, we assume ε ∈ ℝ≥0 and δ ∈ [0, 1].

II-A Notations for Probability Distributions

We use the following notations in the paper. We denote by 𝔻X the set of all probability distributions over a set X, and by |X| the number of elements in a finite set X.

Given a finite set X and a probability distribution λ ∈ 𝔻X, the probability of drawing a value x from λ is denoted by λ[x]. For a finite subset R ⊆ X we define λ[R] by: λ[R] = Σ_{x∈R} λ[x]. Given a λ ∈ 𝔻X and a function f: X → ℝ, the expected value of f over λ is defined by: E_{x∼λ}[f(x)] = Σ_{x∈X} λ[x] · f(x).

  x ← λ ;     // Draw a value x from a given distribution λ
  y ← A(x) ;     // Run A with the input x
  return y ;
Algorithm 1 Lifting A# of A with input λ

For a randomized algorithm A: X → 𝔻Y and a set R ⊆ Y, we denote by A(x)[R] the probability that, given input x, A outputs one of the elements of R. Given a randomized algorithm A and a probability distribution λ over X, we define A#(λ) as the distribution of the output y produced by Algorithm 1. Formally, for a finite set X, the lifting of A w.r.t. X is the function A#: 𝔻X → 𝔻Y such that A#(λ)[R] = Σ_{x∈X} λ[x] · A(x)[R].
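As an illustration only (not from the paper), the lifting of a finite mechanism can be computed exactly by averaging the output distributions weighted by λ; the names `lift`, `A`, and `lam` below are ours:

```python
def lift(mechanism, lam):
    """Lifting of a mechanism w.r.t. an input distribution.

    mechanism: dict mapping each input x to a dict {output y: A(x)[y]}.
    lam: dict mapping each input x to its probability lam[x].
    Returns the output distribution of Algorithm 1 as a dict over outputs.
    """
    out = {}
    for x, px in lam.items():
        for y, pyx in mechanism[x].items():
            out[y] = out.get(y, 0.0) + px * pyx
    return out

# A two-input randomized-response-style mechanism and an input distribution.
A = {0: {0: 0.75, 1: 0.25}, 1: {0: 0.25, 1: 0.75}}
lam = {0: 0.6, 1: 0.4}
lifted = lift(A, lam)   # {0: 0.55, 1: 0.45}
```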

II-B Divergence

We first recall the notion of (approximate) max divergence, which is used to define differential privacy.

Definition 1 (Max divergence).

For a δ ∈ [0, 1], the δ-approximate max divergence between λ0, λ1 ∈ 𝔻X is:

D_∞^δ(λ0 ∥ λ1) = max_{R ⊆ X, λ0[R] > δ} ln( (λ0[R] − δ) / λ1[R] ).

The max divergence is defined by: D_∞(λ0 ∥ λ1) = D_∞^0(λ0 ∥ λ1).
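For small finite domains, the δ-approximate max divergence can be checked directly by enumerating subsets; a brute-force sketch (function name ours, exponential in |X|, illustration only):

```python
from itertools import combinations
from math import log, inf

def max_divergence(l0, l1, delta=0.0):
    """Brute-force D_inf^delta(l0 || l1): maximize ln((l0[S]-delta)/l1[S])
    over all non-empty subsets S with l0[S] > delta."""
    xs = list(l0)
    best = -inf
    for r in range(1, len(xs) + 1):
        for s in combinations(xs, r):
            p0 = sum(l0[x] for x in s)
            p1 = sum(l1[x] for x in s)
            if p0 > delta:
                best = max(best, inf if p1 == 0 else log((p0 - delta) / p1))
    return best

l0 = {'a': 0.5, 'b': 0.5}
l1 = {'a': 0.25, 'b': 0.75}
d_max = max_divergence(l0, l1)   # ln(0.5 / 0.25) = ln 2
```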

Next we recall the notion of the f-divergences [9]. As shown in Table I, many divergence notions (e.g., the Kullback-Leibler divergence [10]) are instances of f-divergence.

Divergence              Corresponding f(t)
KL-divergence           t ln t
Reverse KL-divergence   − ln t
Total variation         |t − 1| / 2
χ²-divergence           (t − 1)²
Hellinger distance      (√t − 1)² / 2
TABLE I: Instances of f-divergence

Definition 2 (f-divergence).

Let F be the set of convex functions f: ℝ⁺ → ℝ such that f(1) = 0. Let X be a finite set, and λ0, λ1 ∈ 𝔻X such that for every x ∈ X, λ1[x] = 0 implies λ0[x] = 0. Then for an f ∈ F, the f-divergence of λ0 from λ1 is defined as:

D_f(λ0 ∥ λ1) = Σ_{x∈X} λ1[x] · f( λ0[x] / λ1[x] ).

We denote by 𝒟 the set of all divergences over 𝔻X.
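The definition above can be evaluated directly on finite distributions; the helper below (ours, not the paper's) instantiates it with the standard f for KL-divergence (f(t) = t ln t) and for total variation (f(t) = |t − 1| / 2):

```python
from math import log

def f_divergence(f, l0, l1):
    """D_f(l0 || l1) = sum_x l1[x] * f(l0[x] / l1[x]).
    Assumes l1[x] = 0 implies l0[x] = 0, as in the definition above."""
    return sum(p1 * f(l0.get(x, 0.0) / p1) for x, p1 in l1.items() if p1 > 0)

kl = lambda t: t * log(t) if t > 0 else 0.0   # f for KL-divergence
tv = lambda t: abs(t - 1) / 2                 # f for total variation

l0 = {'a': 0.5, 'b': 0.5}
l1 = {'a': 0.25, 'b': 0.75}
d_kl = f_divergence(kl, l0, l1)   # = 0.5 * ln(4/3)
d_tv = f_divergence(tv, l0, l1)   # = 0.25
```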

II-C Differential Privacy (DP)

Differential privacy [1, 2] captures the idea that given two “adjacent” inputs x and x′ (from a set X of data with an adjacency relation Φ), a randomized algorithm A cannot distinguish x from x′ (with degree of ε, and up to some exceptions parameterized by δ).

Definition 3 (Differential privacy, DP).

Let ε ∈ ℝ≥0 and δ ∈ [0, 1]. A randomized algorithm A: X → 𝔻Y provides (ε, δ)-differential privacy (DP) w.r.t. an adjacency relation Φ ⊆ X × X if for any (x, x′) ∈ Φ and for any R ⊆ Y,

Pr[A(x) ∈ R] ≤ e^{ε} · Pr[A(x′) ∈ R] + δ,

where the probability is taken over the random choices in A.

Clearly the protection of differential privacy is stronger for smaller ε and δ. It is known that the above definition is equivalent to the following one using max divergence.

Proposition 1.

A randomized algorithm A provides (ε, δ)-DP w.r.t. Φ iff for any (x, x′) ∈ Φ, D_∞^δ(A(x) ∥ A(x′)) ≤ ε and D_∞^δ(A(x′) ∥ A(x)) ≤ ε.

Note that the sequential composition of differentially private mechanisms is differentially private: for any independent randomized algorithms A0 and A1, if each Ab provides (εb, δb)-DP, their sequential composition provides (ε0 + ε1, δ0 + δ1)-DP.

II-D Differential Privacy Mechanisms and Sensitivity

Differential privacy can be achieved by a privacy mechanism, namely a randomized algorithm that adds probabilistic noise to a given input that we want to protect. The amount of noise added by some popular mechanisms (e.g., the Laplace mechanism and the exponential mechanism) depends on a utility function u: X × Y → ℝ that maps a pair of an input and an output to a utility score. More precisely, the noise is added according to the “sensitivity” of u, which we define as follows.

Definition 4 (Utility distance).

Given a utility function u: X × Y → ℝ, the utility distance w.r.t. u is the function d: X × X → ℝ defined by: d(x, x′) = max_{y∈Y} | u(x, y) − u(x′, y) |.

Note that d is a pseudometric. Hereafter we assume that for all x, x′ ∈ X, d(x, x′) = 0 is logically equivalent to x = x′. Then the utility distance d is a metric.

Definition 5 (Sensitivity w.r.t. an adjacency relation).

The sensitivity of a utility function u w.r.t. an adjacency relation Φ ⊆ X × X is defined as: Δ_{u,Φ} = max_{(x, x′) ∈ Φ} d(x, x′).

For example, the exponential mechanism, defined below, perturbs the output according to the sensitivity Δ_{u,Φ} and provides ε-differential privacy w.r.t. Φ.

Example 1 (Exponential mechanism [11]).

Let ε ∈ ℝ≥0. The exponential mechanism is the randomized algorithm A: X → 𝔻Y that, given an input x, outputs y with probability proportional to: exp( ε · u(x, y) / (2 Δ_{u,Φ}) ).
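A minimal sampling sketch of the exponential mechanism, using the standard ε·u/(2Δ) weighting (all names and the toy utility below are ours):

```python
import random
from math import exp

def exponential_mechanism(x, outputs, u, sensitivity, eps, rng=random):
    """Output y with probability proportional to exp(eps*u(x,y)/(2*sensitivity))."""
    weights = [exp(eps * u(x, y) / (2 * sensitivity)) for y in outputs]
    r = rng.random() * sum(weights)
    for y, w in zip(outputs, weights):
        r -= w
        if r <= 0:
            return y
    return outputs[-1]   # guard against floating-point rounding

ys = [0, 1, 2, 3]
u = lambda x, y: -abs(x - y)   # closer outputs get higher utility
y = exponential_mechanism(2, ys, u, sensitivity=1.0, eps=1.0)
```

With a large ε the mechanism concentrates on the highest-utility output, while with ε close to 0 it approaches the uniform distribution over `ys`.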

II-E Extended Differential Privacy (XDP)

We review the notion of extended differential privacy [12], which relaxes differential privacy by incorporating a metric d. Intuitively, this notion guarantees that when two input values x and x′ are closer in terms of d, their output distributions are less distinguishable.

Definition 6 (Extended differential privacy, XDP).

For a metric d over X, we say that a randomized algorithm A: X → 𝔻Y provides (ε, d)-extended differential privacy (XDP) if for all x, x′ ∈ X and for any R ⊆ Y,

Pr[A(x) ∈ R] ≤ e^{ε · d(x, x′)} · Pr[A(x′) ∈ R],

where the probability is taken over the random choices in A.

Note that this notion can be seen as an extension of ε-DP from an adjacency relation to a metric on inputs.

II-F Probability Coupling

We recall the notion of probability coupling as follows.

Fig. 1: A coupling μ of λ0 and λ1.
Definition 7 (Coupling).

Given two distributions λ0 ∈ 𝔻X0 and λ1 ∈ 𝔻X1, a coupling of λ0 and λ1 is a distribution μ ∈ 𝔻(X0 × X1) such that λ0 and λ1 are μ’s marginal distributions, i.e., for each x0 ∈ X0, λ0[x0] = Σ_{x1∈X1} μ[x0, x1], and for each x1 ∈ X1, λ1[x1] = Σ_{x0∈X0} μ[x0, x1]. We denote by cp(λ0, λ1) the set of all couplings of λ0 and λ1.

Example 1 (Coupling as transformation of distributions).

As shown in Fig. 1, let λ0 and λ1 be two distributions over a set of bins. A coupling μ of λ0 and λ1 shows a way of transforming λ0 into λ1: to construct λ1 from λ0, μ moves a certain amount of mass from one bin to another.
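The marginal conditions in Definition 7 are easy to verify mechanically; a small sketch (ours), with a toy coupling that moves 0.25 mass from bin 0 to bin 1:

```python
def is_coupling(mu, l0, l1, tol=1e-12):
    """Check that the marginals of mu (a dict over pairs (x0, x1)) are l0, l1."""
    m0, m1 = {}, {}
    for (x0, x1), p in mu.items():
        m0[x0] = m0.get(x0, 0.0) + p
        m1[x1] = m1.get(x1, 0.0) + p
    return (all(abs(m0.get(x, 0.0) - p) <= tol for x, p in l0.items())
            and all(abs(m1.get(x, 0.0) - p) <= tol for x, p in l1.items()))

l0 = {0: 0.5, 1: 0.5}
l1 = {0: 0.25, 1: 0.75}
mu = {(0, 0): 0.25, (0, 1): 0.25, (1, 1): 0.5}   # moves 0.25 from bin 0 to bin 1
```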

II-G Wasserstein Metric

We then recall the p-Wasserstein metric [13] between two distributions λ0 and λ1, which is defined using a coupling of λ0 and λ1 as follows.

Definition 8 (p-Wasserstein metric).

Let d be a metric over X, and p ∈ ℝ⁺ ∪ {∞}. The p-Wasserstein metric W_{p,d} w.r.t. d is defined by: for any two distributions λ0, λ1 ∈ 𝔻X,

W_{p,d}(λ0, λ1) = min_{μ ∈ cp(λ0, λ1)} ( Σ_{x0, x1} d(x0, x1)^p · μ[x0, x1] )^{1/p}.

W_{1,d} is also called the Earth mover’s distance.

The intuitive meaning of W_{1,d}(λ0, λ1) is the minimum cost of transportation from λ0 to λ1 in transportation theory. As illustrated in Fig. 1, we regard each distribution λ as a set of points where each point x has weight λ[x], and we move some amounts of weight in λ0 from one point to another in order to construct λ1. We represent by μ[x0, x1] the amount of weight moved from x0 to x1. (The total amount of weight moved from a point x0 in λ0 is given by Σ_{x1} μ[x0, x1], while the amount moved into a point x1 in λ1 is given by Σ_{x0} μ[x0, x1]; hence μ is a coupling of λ0 and λ1.) We denote by d(x0, x1) the cost (i.e., distance) of a move from x0 to x1. Then the minimum cost of the whole transportation is given by: W_{1,d}(λ0, λ1) = min_{μ ∈ cp(λ0, λ1)} Σ_{x0, x1} d(x0, x1) · μ[x0, x1].

For instance, let us consider the coupling μ in Example 1. When the cost function d is the Euclidean distance between bins, the transportation described by μ achieves the minimum cost, i.e., the Earth mover’s distance W_{1,d}(λ0, λ1).

On the other hand, the ∞-Wasserstein metric W_{∞,d}(λ0, λ1) represents the minimum largest move between points in a transportation from λ0 to λ1. Specifically, in a transportation μ, max_{(x0, x1): μ[x0, x1] > 0} d(x0, x1) represents the largest move from a point in λ0 to another in λ1. Such a largest move is minimized by a coupling that achieves the ∞-Wasserstein metric: W_{∞,d}(λ0, λ1) = min_{μ ∈ cp(λ0, λ1)} max_{(x0, x1): μ[x0, x1] > 0} d(x0, x1).

We denote by Γ_{∞,d}(λ0, λ1) the set of all couplings that achieve the ∞-Wasserstein metric W_{∞,d}(λ0, λ1).

Then each μ ∈ Γ_{∞,d}(λ0, λ1) can be computed by an efficient algorithm called the North-West corner rule [14] when the function d is submodular, i.e., for all x0 ≤ x0′ and x1 ≤ x1′, d(x0, x1) + d(x0′, x1′) ≤ d(x0, x1′) + d(x0′, x1).
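Given a concrete coupling, the two quantities minimized by W_{1,d} and W_{∞,d} (total transport cost and largest single move) can be read off directly; a small sketch with our own toy coupling:

```python
def transport_costs(mu, d):
    """For a coupling mu over pairs and a cost function d, return
    (total cost, largest move): the objectives of W_1 and W_inf."""
    total = sum(p * d(x0, x1) for (x0, x1), p in mu.items())
    largest = max(d(x0, x1) for (x0, x1), p in mu.items() if p > 0)
    return total, largest

d = lambda a, b: abs(a - b)   # distance between bins on the line
mu = {(0, 0): 0.25, (0, 1): 0.25, (1, 1): 0.5}   # a coupling of {0: 0.5, 1: 0.5}
                                                 # and {0: 0.25, 1: 0.75}
cost, move = transport_costs(mu, d)   # total cost 0.25, largest move 1
```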

II-H Liftings of Relations

Finally, we recall the notion of the lifting of relations.

Definition 9 (Lifting of relations).

Given a relation Φ ⊆ X × X, the lifting of Φ is the maximum relation Φ# ⊆ 𝔻X × 𝔻X such that for any (λ0, λ1) ∈ Φ#, there exists a coupling μ ∈ cp(λ0, λ1) satisfying supp(μ) ⊆ Φ.

Note that by Definition 7, the coupling μ is a probability distribution over X × X whose marginal distributions are λ0 and λ1. If Φ = X × X, then Φ# = 𝔻X × 𝔻X.

Example 2 (Lifted relation).

Let us consider the distributions λ0 and λ1 in Example 1. As shown in Fig. 1, λ0 can be transformed to λ1 based on the coupling μ. Suppose that μ satisfies supp(μ) ⊆ Φ for some adjacency relation Φ. By Definition 9, we then obtain (λ0, λ1) ∈ Φ#, i.e., λ0 is adjacent to λ1 by the lifted relation Φ#. This means that we can construct λ1 from λ0 by moving some mass from x0 to x1 only for (x0, x1) ∈ Φ (i.e., only when x0 is adjacent to x1 by Φ).

Then we restrict the lifting in Definition 9 to use only the couplings that achieve the ∞-Wasserstein metric W_{∞,d}.

Definition 10 (Lifting of relations for optimal transportation).

Let d be a metric over X. Given a relation Φ ⊆ X × X, the lifting of Φ w.r.t. the ∞-Wasserstein metric is the maximum relation Φ#_{∞,d} ⊆ 𝔻X × 𝔻X such that for any (λ0, λ1) ∈ Φ#_{∞,d}, there exists a coupling μ ∈ cp(λ0, λ1) satisfying supp(μ) ⊆ Φ and μ ∈ Γ_{∞,d}(λ0, λ1).

By Definition 9, for any Φ, we have Φ#_{∞,d} ⊆ Φ#.

III Privacy Notions for Probability Distributions

In this section we introduce the privacy notion for user attributes (e.g., age and gender) and present its formal model. We first show a running example in which the privacy of user attributes is illustrated and modelled in terms of differential privacy. We then generalize this to the privacy notions for probability distributions, which we call distribution privacy. Intuitively, distribution privacy models the difficulty of inferring user attributes (represented by probability distributions) from the output observed by an attacker. In particular, we present three definitions that correspond to differential privacy (DP), its variants extended with a metric (XDP) and with probability (PDP), respectively. In Table II, we summarize the privacy notions and their abbreviations. Finally, we give an interpretation of distribution privacy in terms of Bayes factor.

III-A Modeling the Privacy of User Attributes in Terms of DP

As a running example, we consider an LBS (location based service) in which each user queries an LBS provider for a list of restaurants nearby. By observing a user’s location, the LBS provider may be able to infer the user’s attributes (e.g., age, gender, and residence area) and activities (e.g., working, sleeping, and shopping) [5, 6, 7].

To hide a user’s exact geo-location x from the provider, the user reveals only approximate information on x to the provider [3]. More specifically, he applies a randomized algorithm A, called a local obfuscation mechanism, to his location x, and obtains approximate information y with the probability A(x)[y]. When A is the randomized response [8] (resp. the Laplace mechanism), it guarantees the protection of x in terms of differential privacy (resp. extended differential privacy). However, to the best of our knowledge, no prior work has investigated how an obfuscation mechanism may or may not prevent the attacker (e.g., the LBS provider) from inferring user attributes from obfuscated locations.

To illustrate the privacy of user attributes, let us consider an example in which users try to prevent an attacker from inferring whether they are male or female by obfuscating their own exact locations using a mechanism A. For each attribute t ∈ {male, female}, let λt ∈ 𝔻X be the prior distribution of the locations of the users who have the attribute t. Intuitively, λmale (resp. λfemale) represents an attacker’s belief on the locations of the male (resp. female) users before the attacker observes an output of the mechanism A. Then the privacy of t can be modeled as a property that the attacker has no idea on whether the actual location follows the distribution λmale or λfemale after observing an output of A.

This can be formalized in terms of ε-local differential privacy. For each t ∈ {male, female}, we denote by p(y | λt) the probability of observing an obfuscated location y when an actual location x is distributed over λt, i.e., p(y | λt) = Σ_{x∈X} λt[x] · A(x)[y]. Then we define the privacy of t by:

e^{−ε} ≤ p(y | λmale) / p(y | λfemale) ≤ e^{ε},

which represents that the attacker cannot distinguish whether the users follow λmale or λfemale (with degree of ε). In Section III-F, we will present the case where an attacker has a different belief on λmale and λfemale.
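As an illustrative sanity check (all numbers and names below are ours, not from the paper), the smallest ε satisfying the bound above can be computed exactly for a finite mechanism; for δ = 0 the maximum over sets of outputs is attained at a single output y, so a pointwise scan suffices:

```python
from math import log

def lifted(mech, lam):
    """p(y | lam) = sum_x lam[x] * mech[x][y]."""
    out = {}
    for x, px in lam.items():
        for y, p in mech[x].items():
            out[y] = out.get(y, 0.0) + px * p
    return out

def distp_eps(mech, lam0, lam1):
    """Smallest eps with e^-eps <= p(y|lam0)/p(y|lam1) <= e^eps for all y."""
    q0, q1 = lifted(mech, lam0), lifted(mech, lam1)
    return max(abs(log(q0[y] / q1[y])) for y in q0)

# Randomized response flipping the reported location with probability 0.25.
rr = {0: {0: 0.75, 1: 0.25}, 1: {0: 0.25, 1: 0.75}}
lam_male, lam_female = {0: 0.9, 1: 0.1}, {0: 0.1, 1: 0.9}
eps = distp_eps(rr, lam_male, lam_female)   # ln(7/3), below rr's point-level ln 3
```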

Point obfuscation                          Distribution obfuscation
DP     differential privacy                DistP    distribution privacy
XDP    extended differential privacy       XDistP   extended distribution privacy
PDP    probabilistic differential privacy  PDistP   probabilistic distribution privacy
TABLE II: Privacy notions

III-B Distribution Privacy (DistP) and Point Privacy (PointP)

The privacy of user attributes presented in Section III-A can be generalized to the differential privacy for probability distributions. We define the notion of distribution privacy as the differential privacy in which the input is a probability distribution of data rather than a value of data. This notion models a level of obfuscation that hides which distribution a data value is drawn from. Intuitively, we say that a randomized algorithm A provides distribution privacy w.r.t. a set Λ of distributions if, by observing an output of A, we cannot detect from which distribution in Λ A’s input value was drawn.

Definition 11 (Distribution privacy).

Let ε ∈ ℝ≥0 and δ ∈ [0, 1]. Given an adjacency relation Ψ ⊆ 𝔻X × 𝔻X, we say that a randomized algorithm A: X → 𝔻Y provides (ε, δ)-distribution privacy (DistP) w.r.t. Ψ if its lifting A#: 𝔻X → 𝔻Y provides (ε, δ)-differential privacy w.r.t. Ψ, i.e., for all pairs (λ0, λ1) ∈ Ψ and all R ⊆ Y, we have:

A#(λ0)[R] ≤ e^{ε} · A#(λ1)[R] + δ.

Given a set Λ ⊆ 𝔻X, we say A provides (ε, δ)-distribution privacy w.r.t. Λ if it provides (ε, δ)-distribution privacy w.r.t. Λ × Λ.

For example, the privacy of a user attribute described in Section III-A can be formalized as (ε, 0)-distribution privacy w.r.t. {λmale, λfemale}.

Note that distribution privacy is not a mathematically new notion but the differential privacy for distributions of data. To contrast with distribution privacy, we sometimes refer to the differential privacy for values of data as point privacy.

III-C Extended Distribution Privacy (XDistP)

Next we introduce an extended form of distribution privacy that incorporates a metric. Intuitively, extended distribution privacy guarantees that when two input distributions are closer, their output distributions must be less distinguishable.

Definition 12 (Extended distribution privacy).

Let d be a utility distance over 𝔻X, and ε ∈ ℝ≥0. We say that a randomized algorithm A: X → 𝔻Y provides (ε, d)-extended distribution privacy (XDistP) w.r.t. Ψ ⊆ 𝔻X × 𝔻X if the lifting A# provides (ε, d)-extended differential privacy w.r.t. Ψ, i.e., for all (λ0, λ1) ∈ Ψ and all R ⊆ Y, we have:

A#(λ0)[R] ≤ e^{ε · d(λ0, λ1)} · A#(λ1)[R].

III-D Probabilistic Distribution Privacy (PDistP)

We next introduce an approximate notion of distribution privacy analogously to the notion of probabilistic differential privacy (PDP) [15]. Intuitively, a randomized algorithm provides (ε, δ)-probabilistic distribution privacy if it provides ε-distribution privacy with probability at least 1 − δ.

Definition 13 (Probabilistic distribution privacy).

Let ε ∈ ℝ≥0 and δ ∈ [0, 1]. We say that a randomized algorithm A: X → 𝔻Y provides (ε, δ)-probabilistic distribution privacy (PDistP) w.r.t. Ψ ⊆ 𝔻X × 𝔻X if the lifting A# provides (ε, δ)-probabilistic differential privacy w.r.t. Ψ, i.e., for all (λ0, λ1) ∈ Ψ, there exists a set R′ ⊆ Y such that A#(λ0)[R′] ≥ 1 − δ, A#(λ1)[R′] ≥ 1 − δ, and that for all R ⊆ R′, we have:

e^{−ε} · A#(λ1)[R] ≤ A#(λ0)[R] ≤ e^{ε} · A#(λ1)[R],

where the probability space is taken over the choices of randomness in A.

By definition, (ε, 0)-DistP is equivalent to (ε, 0)-PDistP. In general, however, (ε, δ)-DistP does not imply (ε, δ)-PDistP, while (ε, δ)-PDistP implies (ε, δ)-DistP. We show the formal statements and their proofs in Appendix A-E.

III-E Interpretation by Bayes Factor

The interpretation of differential privacy has been explored in previous work [16, 12, 17], in particular using the notion of the Bayes factor. Similarly, the meaning of distribution privacy can also be explained in terms of the Bayes factor, which compares the attacker’s prior and posterior beliefs as follows.

Assume that the attacker has some belief on the input distribution before observing the output values of an obfuscator A. We denote by Λ ⊆ 𝔻X a set of probability distributions (that we wish to make indistinguishable), and by p(λ) the prior probability that a distribution λ is chosen from Λ as the input to the obfuscator A. By observing an output value y of A, the attacker updates his belief on the input distribution. We denote by p(λ | y) the posterior probability of a distribution λ being chosen (from Λ) given an output value y of A.

For two distributions λ0, λ1 ∈ Λ, the Bayes factor is defined as the ratio of the two posteriors divided by that of the two priors: K(λ0, λ1, y) = (p(λ0 | y) / p(λ1 | y)) / (p(λ0) / p(λ1)). If the Bayes factor is far from 1, the attacker significantly updates his belief on the distribution by observing an output y of A.

Assume that A provides (ε, 0)-distribution privacy. By Bayes’ theorem, we obtain:

K(λ0, λ1, y) = p(y | λ0) / p(y | λ1) ≤ e^{ε}.

Intuitively, if the attacker believes that λ0 is k times more likely than λ1 before the observation, then he believes that λ0 is at most k · e^{ε} times more likely than λ1 after the observation. This means that for a small value of ε, distribution privacy guarantees that the attacker does not gain much information on the distribution by observing the perturbed output y.

In the case of extended distribution privacy, the Bayes factor is bounded above by e^{ε · d(λ0, λ1)}. Hence the attacker gains more information on the distribution for a larger value of the distance d(λ0, λ1).

Finally, as for probabilistic distribution privacy, the Bayes factor is bounded above by e^{ε} with probability at least 1 − δ. This means the attacker might gain more information on the distribution with probability at most δ.
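The Bayes-factor reading above can be checked numerically; a toy sketch (numbers and names ours) where the posterior update after one observation equals the likelihood ratio:

```python
def posterior(prior, lik, y):
    """Posterior over candidate input distributions given one output y.
    prior: dict t -> p(t); lik: dict t -> dict y -> p(y | t)."""
    joint = {t: prior[t] * lik[t][y] for t in prior}
    z = sum(joint.values())
    return {t: p / z for t, p in joint.items()}

prior = {'male': 0.5, 'female': 0.5}
lik = {'male': {0: 0.7, 1: 0.3}, 'female': {0: 0.3, 1: 0.7}}
post = posterior(prior, lik, 0)

# Bayes factor = (posterior ratio) / (prior ratio) = p(0|male) / p(0|female).
bayes_factor = (post['male'] / post['female']) / (prior['male'] / prior['female'])
```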

III-F Privacy Guarantee for Attackers with Close Beliefs

In the previous sections, we assumed that we know the actual input distributions, in order to learn how much noise is sufficient for a mechanism to make the distributions indistinguishable in terms of distribution privacy. However, an attacker may have a different belief on the distributions, for example, more accurate distributions obtained from more observations. For each t ∈ {male, female}, let λt be the distribution that we use to determine the amount of noise, and λ̃t be the distribution representing the attacker’s belief.

When a mechanism A provides (ε, 0)-DistP w.r.t. an adjacency relation and the attacker’s beliefs λ̃male and λ̃female are close enough to the user’s λmale and λfemale, then the distinguishability of λ̃male and λ̃female through A is bounded by ε plus a term measuring the closeness of the beliefs. Similarly, when A provides (ε, d)-XDistP, the bound degrades gracefully with the distance between the attacker’s beliefs and ours. Therefore, when the attacker’s beliefs are closer to ours, a stronger distribution privacy is guaranteed.

IV Generalized Distribution Privacy

In this section we generalize the definitions of distribution privacy and its extended form w.r.t. an arbitrary divergence D. Then we show their useful properties, including compositionality and the relationships among privacy notions.

IV-A Variants of Differential Privacy

To generalize distribution privacy, we first present a generalized formulation of privacy notions including differential privacy, parameterized with a divergence D that determines the capability of distinguishing the output distributions.

Definition 14 (DP w.r.t. an adjacency relation and a divergence).

For an ε ∈ ℝ≥0, an adjacency relation Φ ⊆ X × X and a divergence D, we say that a randomized algorithm A: X → 𝔻Y provides (D, ε)-differential privacy w.r.t. Φ if for all (x, x′) ∈ Φ,

D(A(x) ∥ A(x′)) ≤ ε.

Note that (D_∞^δ, ε)-differential privacy is (ε, δ)-differential privacy. (D_f, ε)-differential privacy is called ε-f-divergence privacy [18], and (D_KL, ε)-differential privacy (KLP) is called ε-KL-privacy [18].

Next we generalize the notion of extended differential privacy (Definition 6) to an arbitrary divergence D.

Definition 15 (XDP w.r.t. a divergence).

Let d be a metric over X, ε ∈ ℝ≥0, and D a divergence. We say that a randomized algorithm A: X → 𝔻Y provides (D, ε, d)-extended differential privacy if for all x, x′ ∈ X,

D(A(x) ∥ A(x′)) ≤ ε · d(x, x′).

IV-B Generalized Distribution Privacy

In this section we generalize the notion of (extended) distribution privacy to an arbitrary divergence D. The main aim of this generalization is to present theoretical properties of distribution privacy in a more general form, and also to discuss distribution privacy based on the f-divergences.

Intuitively, we say that a randomized algorithm A provides (D, ε)-distribution privacy w.r.t. a set Ψ of pairs of distributions if for each pair (λ0, λ1) ∈ Ψ, the divergence D cannot detect which distribution (of λ0 and λ1) is used to generate A’s input value.

Definition 16 (DistP generalized to a divergence).

Let D be a divergence, ε ∈ ℝ≥0, and Ψ ⊆ 𝔻X × 𝔻X. We say that a randomized algorithm A: X → 𝔻Y provides (D, ε)-distribution privacy w.r.t. Ψ if the lifting A# provides (D, ε)-differential privacy w.r.t. Ψ, i.e., for all (λ0, λ1) ∈ Ψ,

D(A#(λ0) ∥ A#(λ1)) ≤ ε.

Given a set Λ ⊆ 𝔻X, we say A provides (D, ε)-distribution privacy w.r.t. Λ if it provides (D, ε)-distribution privacy w.r.t. Λ × Λ. Furthermore, we say A provides (D, ε)-distribution privacy if it provides (D, ε)-distribution privacy w.r.t. 𝔻X × 𝔻X.

Next we introduce extended distribution privacy parameterized with a divergence D. Intuitively, extended distribution privacy with a divergence D guarantees that when two input distributions λ0 and λ1 are closer (in terms of a utility distance d), then the output distributions A#(λ0) and A#(λ1) must be less distinguishable (in terms of the divergence D).

Definition 17 (XDistP generalized to a divergence).

Let d be a utility distance over 𝔻X, D a divergence, and ε ∈ ℝ≥0. We say that a randomized algorithm A: X → 𝔻Y provides (D, ε, d)-extended distribution privacy w.r.t. Ψ ⊆ 𝔻X × 𝔻X if the lifting A# provides (D, ε, d)-extended differential privacy w.r.t. Ψ, i.e., for all (λ0, λ1) ∈ Ψ,

D(A#(λ0) ∥ A#(λ1)) ≤ ε · d(λ0, λ1).

Then (D, ε, d)-extended distribution privacy w.r.t. a set Λ ⊆ 𝔻X is defined analogously.

Fig. 2: Two kinds of sequential compositions: (a) sequential composition with shared input, and (b) sequential composition with independent inputs.

[Table: compositionality of distribution privacy (DistP) and extended distribution privacy (XDistP) under the two kinds of sequential composition shown in Fig. 2.]