Practical and Robust Privacy Amplification with Multi-Party Differential Privacy

08/30/2019 · by Tianhao Wang, et al.

When collecting information, local differential privacy (LDP) alleviates privacy concerns of users, as users' private information is randomized before being sent to the central aggregator. However, LDP results in loss of utility due to the amount of noise that is added. To address this issue, recent work introduced an intermediate server, with the assumption that this intermediate server does not collude with the aggregator. Using this trust model, one can add less noise to achieve the same privacy guarantee, thus improving utility. In this paper, we investigate this multi-party setting of LDP. We first analyze the threat model and identify potential adversaries. We then make observations about existing approaches and propose new techniques that achieve a better privacy-utility tradeoff than existing ones. Finally, we perform experiments to compare different methods and demonstrate the benefits of using our proposed method.


1 Introduction

To protect data privacy in the context of data publishing, the concept of differential privacy (DP) has been proposed, and has been widely adopted [21]. DP mechanisms add noise to the aggregated result such that the difference of whether or not an individual is included in the data is bounded. Recently, local differential privacy (LDP) has been deployed by industry. LDP differs from DP in that random noise is added by each user before sending the data to the central server. Thus, users do not need to rely on the trustworthiness of the company hosting the server. This desirable feature of LDP has led to wider deployment by industry (e.g., by Google [24], Apple [1], and Microsoft [19]). Meanwhile, DP is still deployed in settings where the centralized server can be trusted (e.g., the US Census Bureau plans to deploy DP technologies for the 2020 census [3]).

However, removing the trusted central party comes at the cost of utility. Since every user adds some independently generated noise, the effect of the noise adds up when aggregating the result. While noise of scale (standard deviation) $\Theta(1/\epsilon)$ suffices for DP, LDP requires noise of scale $\Theta(\sqrt{n}/\epsilon)$ on the aggregated result, where $n$ is the number of users. This gap is essential for eliminating the trust in the centralized server, and cannot be removed by algorithmic improvement [13].

Recently, researchers introduced settings where one can achieve a middle ground between DP and LDP, in terms of both privacy and utility. This is typically achieved by introducing additional parties [15, 23, 7, 16]. One such setting is called the shuffler model, which introduces another party called the shuffler. Users perturb their information minimally, and then send an encrypted version of the perturbed information to the shuffler, who shuffles the users' reports and forwards them to the server. The server then decrypts the reports and aggregates the information. If the shuffler and the server collude, there is no privacy amplification, and the user obtains privacy only from perturbation. If the shuffler and the server can be trusted not to collude, then the shuffler learns nothing about the reported data (because of the semantic security of the encryption scheme), and the server learns less about each individual's report because it cannot link a user to a report. In short, the role of the shuffler is to break the linkage between the users and the reports, thus providing some privacy boost. Due to the privacy boost, users can add less noise while achieving the same level of privacy against the server. This boost, however, requires trusting that the shuffler will not collude with the server. This new model of LDP, which we call Multi-party DP (MDP), offers a different trade-off between trust and utility than DP and LDP.

Besides the shuffle-based approach, there is another interesting direction in the MDP model that uses homomorphic encryption [16]. In particular, each user homomorphically encrypts his/her value using one-hot encoding. The auxiliary server then multiplies the ciphertexts in each location to get the aggregated result (i.e., a histogram), and adds noise to the histogram to provide a DP guarantee. Finally, the results are sent to the server. As one-hot encoding is used, the communication cost is large for big domains.

Since the MDP model involves many parties, there could be different patterns of interaction and collusion among the parties. The possibilities of these colluding parties and their consequences have not been systematically analyzed. For example, existing work proves the privacy boost obtained by shuffling under the assumption that the adversary observes the shuffled reports and knows the input values of the users (except the victim). However, if the other users collude with the adversary, they could also provide their locally perturbed reports, invalidating any privacy boost due to shuffling. For another example, while the homomorphic encryption-based approach provides a privacy guarantee when the adversary consists of the users (except the victim) colluding with the server, there is no privacy when the adversary consists of the server colluding with the auxiliary server. In this paper, we analyze the interactions and potential threats of the MDP model in more detail. We present a unified view of privacy that generalizes DP and LDP. Different parties and possible collusions are then presented and analyzed. Existing protocols are also cast in this framework and analyzed.

Based on our observations, we propose a protocol called MURS (which stands for Multi Uniform Random Shufflers). MURS is built on top of the shuffler-based approach. But compared to existing methods, which use random response, MURS supports Optimized Local Hashing (OLH), which provides better utility when the domain size is large. In OLH, each user reports a randomly selected hash function, together with a perturbation of the hashed result of their sensitive value. To enable the usage of OLH, we show that the essence of the privacy amplification [7] is a distribution from the LDP mechanism that is independent of the input value. We then configure OLH to identify such an independent distribution.

As a result of our analysis of the MDP model, we propose to have the auxiliary server also introduce noise. In MURS, the auxiliary server, besides shuffling the received reports, adds some uniformly random reports, so that even when all other users collude with the server, there is still a privacy guarantee. Moreover, we suggest having more auxiliary servers, which mitigates the threat of a single auxiliary server colluding with the server. As long as not all of the auxiliary servers collude with the server, the privacy amplification guarantee still holds.

To summarize, the main contributions of this paper are:

  • We present a unified view of DP, LDP, and recent enhancements, and propose a general framework of MDP.

  • We design a protocol, MURS, that achieves a better trust model and a better utility-privacy trade-off. The design of MURS relies on (1) a theoretical improvement; (2) a thorough analysis of MDP; and (3) existing ideas from related fields.

  • We provide implementation details and measure the utility and performance of MURS on real datasets. Moreover, we will open source our implementation so that other researchers can build on our results.

Roadmap. In Section 2 we present the requisite background. Existing work whose goal is amplifying privacy guarantees is presented in Section 3. We then analyze our multi-party DP model in Section 4. Based on our analysis, we present our proposal in Section 5. Our evaluation is presented in Section 6. Related work is discussed in Section 7. Finally, we end with some concluding remarks in Section 8.

2 Background

We first review the privacy definitions and the corresponding algorithms used to satisfy these definitions. We then describe some cryptographic primitives that will be used throughout this paper.

2.1 Differential Privacy

Differential privacy is a rigorous notion of individual privacy for the setting where there is a trusted data curator, who gathers data from individual users, processes the data in a way that satisfies DP, and then publishes the results. Intuitively, the DP notion requires that any single element in a dataset has only a limited impact on the output.

Definition 1 (Differential Privacy).

An algorithm $\mathbf{A}$ satisfies $(\epsilon, \delta)$-DP, where $\epsilon, \delta \ge 0$, if and only if for any neighboring datasets $D$ and $D'$, and any set $R$ of possible outputs of $\mathbf{A}$, we have $\Pr[\mathbf{A}(D) \in R] \le e^{\epsilon}\,\Pr[\mathbf{A}(D') \in R] + \delta$.

Denote a dataset as $D = \langle v_1, v_2, \ldots, v_n \rangle$. Two datasets $D$ and $D'$ are said to be neighbors, denoted $D \simeq D'$, iff there exists at most one index $i$ such that $v_i \ne v'_i$, while $v_j = v'_j$ for all other $j$. When $\delta = 0$, we simplify the notation and call $(\epsilon, 0)$-DP simply $\epsilon$-DP. To satisfy DP, one can use the Laplace Mechanism, described below:

Laplace Mechanism. The Laplace mechanism computes a function $f$ on the dataset $D$ in a differentially private manner, by adding to $f(D)$ a random noise. The magnitude of the noise depends on $GS_f$, the global sensitivity (or simply the sensitivity) of $f$. When $f$ outputs a single element, such a mechanism $\mathbf{A}$ is given below:

$\mathbf{A}(D) = f(D) + \mathrm{Lap}\left(GS_f / \epsilon\right)$, where $GS_f = \max_{D \simeq D'} |f(D) - f(D')|$.

In the above, $\mathrm{Lap}(\beta)$ denotes a random variable sampled from the Laplace distribution with scale parameter $\beta$, i.e., $\Pr[\mathrm{Lap}(\beta) = x] \propto e^{-|x|/\beta}$. When $f$ outputs a vector, $\mathbf{A}$ adds a fresh sample of $\mathrm{Lap}(GS_f/\epsilon)$ to each element of the vector. In histogram counting queries, $f(D) = \langle \sum_i \mathbb{1}_{\{v_i = v\}} \rangle_{v}$, where $\mathbb{1}_{\{v_i = v\}}$ is the indicator function that the $i$-th user's value $v_i$ equals $v$, and a single record changes each count by at most one.
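To make this concrete, below is a minimal Python sketch of the Laplace mechanism for a histogram counting query (the function name and toy parameters are our own illustration; a per-cell sensitivity of 1 is assumed for this example):

```python
import numpy as np

def laplace_histogram(values, domain_size, epsilon):
    # Exact counts of each value in {0, ..., domain_size - 1}.
    counts = np.bincount(values, minlength=domain_size).astype(float)
    # Add a fresh Lap(GS_f / epsilon) sample to each count; GS_f = 1 assumed here.
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon, size=domain_size)
    return counts + noise

# Toy usage: 10,000 users over a domain of size 4, epsilon = 1.
values = np.random.randint(0, 4, size=10_000)
print(laplace_histogram(values, domain_size=4, epsilon=1.0))
```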

2.2 Local Differential Privacy

Compared to the centralized setting, the local version of DP offers a stronger level of protection, because each user only reports the perturbed data. Each user’s privacy is still protected even if the aggregator is malicious.

In the local setting, each user perturbs the input value $v$ using an algorithm $\mathbf{A}$ and reports $\mathbf{A}(v)$ to the aggregator.

Definition 2 (Local Differential Privacy).

An algorithm $\mathbf{A}$ satisfies $(\epsilon, \delta)$-local differential privacy ($(\epsilon, \delta)$-LDP), where $\epsilon, \delta \ge 0$, if and only if for any pair of inputs $v$ and $v'$, and any set $R$ of possible outputs of $\mathbf{A}$, we have $\Pr[\mathbf{A}(v) \in R] \le e^{\epsilon}\,\Pr[\mathbf{A}(v') \in R] + \delta$.

Typically, the value of $\delta$ used is $0$ (thus $\epsilon$-LDP). In LDP, most problems can be reduced to frequency estimation. We present two state-of-the-art protocols for this problem.

Random Response. The basic mechanism in LDP is called random response [40]. It was introduced for the binary case, but can be easily generalized to the categorical setting. Here we present the generalized version of random response (GRR), which enables the estimation of the frequency of any given value $v$ in a domain $[d] = \{1, \ldots, d\}$.

Here each user with private value $v \in [d]$ sends the true value $v$ with probability $p$, and with probability $1 - p$ sends a randomly chosen $v' \in [d]$ s.t. $v' \ne v$. More formally, the perturbation function is defined as

$\Pr[\mathrm{GRR}(v) = y] = p = \frac{e^{\epsilon}}{e^{\epsilon} + d - 1}$ if $y = v$, and $\Pr[\mathrm{GRR}(v) = y] = q = \frac{1}{e^{\epsilon} + d - 1}$ if $y \ne v$.  (1)

This satisfies $\epsilon$-LDP since $p/q = e^{\epsilon}$. To estimate the frequency of $v$ for $v \in [d]$, one counts how many times $v$ is reported, denoted by $C_v = \sum_i \mathbb{1}_{\{y_i = v\}}$, and then computes

$\tilde f_v = \frac{C_v - n q}{p - q}$,  (2)

where $\mathbb{1}_{\{y_i = v\}}$ is the indicator function that the report $y_i$ of the $i$-th user equals $v$, and $n$ is the total number of users.
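The following is a minimal Python sketch of GRR perturbation and the unbiased estimator of Equation (2) (function names and parameters are illustrative):

```python
import numpy as np

def grr_perturb(v, d, epsilon, rng):
    # Keep the true value with probability p; otherwise report one of the
    # d - 1 other values uniformly at random.
    p = np.exp(epsilon) / (np.exp(epsilon) + d - 1)
    if rng.random() < p:
        return v
    other = rng.integers(0, d - 1)
    return other if other < v else other + 1

def grr_estimate(reports, d, epsilon):
    # Debias the observed counts using p and q from Equation (1).
    n = len(reports)
    p = np.exp(epsilon) / (np.exp(epsilon) + d - 1)
    q = 1.0 / (np.exp(epsilon) + d - 1)
    counts = np.bincount(reports, minlength=d).astype(float)
    return (counts - n * q) / (p - q)

rng = np.random.default_rng(0)
d, epsilon = 8, 2.0
values = rng.integers(0, d, size=20_000)
reports = np.array([grr_perturb(v, d, epsilon, rng) for v in values])
print(np.round(grr_estimate(reports, d, epsilon)))  # close to the true counts
```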

Optimized Local Hashing. This protocol deals with a large domain size $d$ by first using a hash function to compress the input domain $[d]$ into a smaller domain $[d']$ of size $d'$, and then applying randomized response to the hashed value in $[d']$. In this protocol, both the hashing step and the randomization step result in information loss. The choice of the parameter $d'$ is a tradeoff between loss of information during the hashing step and loss of information during the randomization step. It is shown in [37] that the estimation variance, as a function of $d'$, is minimized when $d' = e^{\epsilon} + 1$.

In OLH, one reports $\langle H, \mathrm{GRR}(H(v)) \rangle$, where $H$ is randomly chosen from a family of hash functions that hash each value in $[d]$ to $[d']$, and $\mathrm{GRR}$ is the perturbation function for random response operating on the domain $[d']$ (thus $d$ is replaced by $d'$ in Equation (1)). Let $\langle H_i, y_i \rangle$ be the report from the $i$-th user. For each value $v$, to compute its frequency, one first computes the number of supporting reports $C_v = |\{i \mid H_i(v) = y_i\}|$, and then transforms it to its unbiased estimation

$\tilde f_v = \frac{C_v - n/d'}{p - 1/d'}$.  (3)

In [37], the estimation variances of GRR and OLH are analyzed and compared. Unlike GRR, OLH has a variance that does not depend on $d$. As a result, for a smaller domain (such that $d < 3e^{\epsilon} + 2$), GRR is better; but for a large $d$, OLH is preferable.
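A minimal Python sketch of OLH follows; a fresh random mapping stands in for a seeded hash function, and all names are illustrative:

```python
import numpy as np

def olh_perturb(v, d, epsilon, rng):
    # Report a per-user random hash H: [d] -> [d'] together with GRR(H(v)) over [d'].
    d_prime = max(2, int(round(np.exp(epsilon) + 1)))   # d' = e^eps + 1, rounded
    H = rng.integers(0, d_prime, size=d)                # explicit random mapping
    p = np.exp(epsilon) / (np.exp(epsilon) + d_prime - 1)
    if rng.random() < p:
        y = H[v]
    else:
        other = rng.integers(0, d_prime - 1)
        y = other if other < H[v] else other + 1
    return H, y

def olh_estimate(reports, d, epsilon):
    # Count the reports that "support" each value v (H(v) == y), then debias.
    d_prime = max(2, int(round(np.exp(epsilon) + 1)))
    p = np.exp(epsilon) / (np.exp(epsilon) + d_prime - 1)
    n = len(reports)
    support = np.zeros(d)
    for H, y in reports:
        support += (H == y)
    return (support - n / d_prime) / (p - 1.0 / d_prime)

rng = np.random.default_rng(0)
d, epsilon = 100, 1.0
values = rng.integers(0, d, size=50_000)
reports = [olh_perturb(v, d, epsilon, rng) for v in values]
print(np.round(olh_estimate(reports, d, epsilon))[:10])
```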

2.3 Cryptographic Primitives

In addition to the mechanisms used to ensure privacy, in the setting where DP and LDP are commonly used, cryptographic primitives are also necessary to protect data from breaches. For example, when LDP is used, the reports are transmitted over the Internet, and the security of the communication channel relies on modern cryptography (more specifically, TLS). Cryptographic protocols can also be used in place of LDP and DP for privacy.

Additive Homomorphic Encryption. In additive homomorphic encryption [32], one can apply an algebraic operation (denoted by $\otimes$) to two ciphertexts $E(m_1)$ and $E(m_2)$, so that the result $E(m_1) \otimes E(m_2)$ is a ciphertext of the sum $m_1 + m_2$ of the corresponding plaintexts.

Mix-net (Shuffler). A mix-net (introduced in [14]) is a server that receives users' inputs and sends them to another server in a way that breaks any linkage from the identifier of a user to the input of the user. In particular, the $i$-th user's data $v_i$ is encrypted with two layers, i.e., $c_i = E_{pk_a}(E_{pk_s}(v_i))$, where $pk_s$ is the server's public key and $pk_a$ is the shuffler's. The shuffler possesses the corresponding secret key $sk_a$ and can decrypt the outer layer to obtain $E_{pk_s}(v_i)$. The shuffler then generates a random permutation $\pi$ from $[n]$ to $[n]$ and sends $\langle E_{pk_s}(v_{\pi(1)}), \ldots, E_{pk_s}(v_{\pi(n)}) \rangle$ to the server ($n$ is the number of users). The server decrypts the inner layer with $sk_s$ to get the plaintexts. During this process, the shuffler learns nothing about $v_i$ as it is encrypted. On the other hand, even if the server eavesdrops on the network communication and learns that $c_i$ was sent from user $i$, the server cannot link the reported data to the individual user as it is shuffled.
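For concreteness, here is a minimal mix-net sketch in Python; it assumes the PyNaCl package for the layered public-key encryption (any scheme with the same interface would work), and the message contents are illustrative:

```python
import random
from nacl.public import PrivateKey, SealedBox  # assumption: PyNaCl is installed

# Key pairs for the server (inner layer) and the shuffler (outer layer).
server_sk = PrivateKey.generate()
shuffler_sk = PrivateKey.generate()

# Users: encrypt the report for the server first, then wrap it for the shuffler.
reports = [f"report-of-user-{i}".encode() for i in range(5)]
wrapped = [SealedBox(shuffler_sk.public_key).encrypt(
               SealedBox(server_sk.public_key).encrypt(r))
           for r in reports]

# Shuffler: removes the outer layer (sees only inner ciphertexts) and permutes.
inner = [SealedBox(shuffler_sk).decrypt(c) for c in wrapped]
random.shuffle(inner)

# Server: removes the inner layer; the plaintexts can no longer be linked to senders.
print([SealedBox(server_sk).decrypt(c).decode() for c in inner])
```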

3 Existing Privacy Amplification Techniques

Due to the large amount of noise and the resulting poor utility of LDP (especially compared to centralized DP), researchers have investigated relaxations of the LDP trust model to achieve better utility. The intuition is to introduce an auxiliary party to hide fine-grained information. Users only need to trust that the auxiliary server does not collude with the original server. This trust model is better (i.e., safer for users) than that of DP, where the centralized server is trusted, but worse than that of LDP, where no party needs to be trusted.

3.1 Privacy Amplification via Shuffling

The shuffling idea was originally proposed in Prochlo [12], where a shuffler is inserted between the users and the server to break the linkage between the report and the user identity. The authors of Prochlo showed the intuition of the privacy benefit of shuffling; the formal proof was given later [23]. Intuitively, if the users send data with LDP using privacy budget $\epsilon_l$, then after shuffling, it is proved that the users' data is protected by $(\epsilon_c, \delta)$-DP with $\epsilon_c < \epsilon_l$. This phenomenon was also investigated by two other groups [15, 7]. Table 1 gives a summary of these results. Among them, [7] provides the strongest result in the sense that the amplified $\epsilon_c$ is the smallest, and the use case is the most general.

Method | Condition
[23] |
[15] | binary
[7] |
Table 1: Guarantee comparison. Each row corresponds to a method. The amplified $\epsilon_c$ only differs in constants, but the circumstances under which each method can be used are different. In the condition column, "binary" means only binary random response can be used.

In [7], the authors introduce a technique called blanket decomposition. The idea is to decompose the distribution of an LDP report into a linear combination of two distributions, one concentrated on the true value and the other independent of it (uniformly random). In particular, $\mathrm{GRR}(v)$ given in Equation (1) is decomposed into

$\Pr[\mathrm{GRR}(v) = y] = (1 - \gamma)\cdot \mathbb{1}_{\{y = v\}} + \gamma \cdot \tfrac{1}{d}$,

where $\gamma = \frac{d}{e^{\epsilon} + d - 1}$ and the second term corresponds to the uniform distribution over $[d]$. After shuffling, the histogram of the $n - 1$ (except the victim's) such random variables can be regarded as some input values (determined by the sensitive inputs) plus freshly sampled noise added to each element of the histogram, where the noise comes from the blanket (i.e., the $\gamma$ part) and follows $\mathrm{Bin}(n-1, \gamma/d)$. Intuitively, the binomial distribution is close to the Gaussian, which is a standard mechanism for satisfying $(\epsilon, \delta)$-DP. Theorem 1 quantifies this effect.

Theorem 1 (Binomial Mechanism, derived from Theorem 3.1 of [7]).

The binomial mechanism, which adds independent $\mathrm{Bin}(n, p)$ noise to each component of the histogram, satisfies $(\epsilon, \delta)$-DP, where $\epsilon$ decreases as the noise variance $np(1-p)$ grows; the precise expression is given in [7].

Thus, given the local budget $\epsilon_l$, the authors of [7] can derive the central guarantee $\epsilon_c$. Note that the derived $\epsilon_c$ is not tight. It is indicated in [7] that with other tighter bounds, or even numerical calculation, a smaller $\epsilon_c$ can be derived. One limitation of [7] is that $\epsilon_c$ cannot be arbitrarily small, as the blanket probability $\gamma$ is determined by $\epsilon_l$ and is at most $1$. In the extreme case, even if $\epsilon_l$ is $0$, the derived $\epsilon_c$ remains strictly positive.
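The decomposition can be checked numerically; the short sketch below (our own illustration) samples from GRR directly and from the blanket mixture with $\gamma = d/(e^{\epsilon}+d-1)$, and the two empirical distributions agree:

```python
import numpy as np

rng = np.random.default_rng(1)
d, epsilon, v, trials = 5, 1.0, 2, 500_000
p = np.exp(epsilon) / (np.exp(epsilon) + d - 1)
gamma = d / (np.exp(epsilon) + d - 1)        # total mass of the uniform "blanket"

# Direct GRR sampling.
keep = rng.random(trials) < p
others = rng.integers(0, d - 1, size=trials)
others = np.where(others < v, others, others + 1)
grr = np.where(keep, v, others)

# Mixture sampling: true value w.p. 1 - gamma, uniform over the domain w.p. gamma.
blanket = rng.random(trials) < gamma
mixture = np.where(blanket, rng.integers(0, d, size=trials), v)

print(np.bincount(grr, minlength=d) / trials)      # both print approximately
print(np.bincount(mixture, minlength=d) / trials)  # [q, q, p, q, q]
```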

3.2 DP from Homomorphic Encryption

When an additional party is introduced, we can leverage cryptographic tools to separate the information, so that collectively the parties achieve DP with higher utility while none of them can glean sensitive information from its own share. [16] proposes one instantiation of such an idea via homomorphic encryption, which we refer to as the HE-based approach for short. In this approach, users first encode their data via one-hot encoding, i.e., encode $v$ using a bit-vector where only the $v$-th bit is $1$ and the others are all $0$'s. Then each bit is encrypted with homomorphic encryption, and the ciphertexts are sent to the auxiliary party via a secure channel. Denote by $c_i^j$ the ciphertext of user $i$'s report on his/her $j$-th bit; the auxiliary server computes $c^j = \bigotimes_i c_i^j$ for each location $j$. Then for each location $j$, the auxiliary server generates random noise in a way that satisfies centralized DP, encrypts the noise, multiplies it with $c^j$, and sends the result to the server. The server decrypts the aggregated ciphertexts. Since the auxiliary server knows the amount of noise it added, the server must add another DP noise before publishing the information, in order to prevent the auxiliary server from learning the true results. Researchers have abstracted these operations and provided general recipes for writing DP programs in this framework [16].

In this design, if the auxiliary server and the server collude, the auxiliary server can send the ciphertexts of the individual data to the server, and the privacy guarantee is completely broken. To counter this threat, each user can add local noise to trade off utility for a worst-case privacy guarantee. The key issues are that, due to one-hot encoding, (1) the communication/computation overhead is large, and (2) one cannot use OLH to obtain better utility.
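The following sketch illustrates the HE-based flow for a single histogram, assuming the python-paillier (phe) package; the noise choice (two-sided geometric) and all names are our own illustration rather than the exact construction of [16]:

```python
import numpy as np
from phe import paillier  # assumption: python-paillier is installed

d, epsilon = 4, 1.0
public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Users: one-hot encode the value and encrypt each bit (costly when d is large).
values = np.random.randint(0, d, size=50)
enc_reports = [[public_key.encrypt(int(v == j)) for j in range(d)] for v in values]

# Auxiliary server: homomorphically sum each location, then add encrypted integer
# noise (two-sided geometric, a discrete analogue of Laplace noise).
noise = (np.random.geometric(1 - np.exp(-epsilon), size=d)
         - np.random.geometric(1 - np.exp(-epsilon), size=d))
noisy_hist = []
for j in range(d):
    total = enc_reports[0][j]
    for report in enc_reports[1:]:
        total = total + report[j]        # ciphertext addition = plaintext addition
    noisy_hist.append(total + int(noise[j]))

# Server: decrypt the noisy histogram (it would add its own noise before release).
print([private_key.decrypt(c) for c in noisy_hist])
```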

4 The Multi-party Differential Privacy Model

In this section, we formalize the model of multi-party differential privacy (which we call MDP). The goal is to examine different aspects of the model, and identify the important factors one should consider when designing a protocol in this model. We start by analyzing different aspects of the model. We then cast existing privacy amplification work reviewed in Section 3 in our model. Finally, we present new observations that lay the basis for our proposed method described in Section 5.

4.1 System Model

The Parties. There are four types of parties: users, the server, the auxiliary server, and the analyst. Figure 1 shows the interactions among them. Users send information to the auxiliary server. The auxiliary server processes the users' information and then forwards the assembled information to the server. From the aggregated information, the server finally derives results for queries issued by the analyst. The auxiliary server does not exist in the traditional models of DP and LDP. In the shuffle-based approach (Section 3.1), the auxiliary server is the shuffler; in the HE-based approach (Section 3.2), the auxiliary server is a cryptographic server.

The Adversaries. From the point of view of a single user, other parties, including the auxiliary server, the server, the analyst, and other users, could all be malicious. In DP and LDP, there is no auxiliary server, and identifying the adversary model is straightforward. In LDP, the adversary is assumed to control all parties other than the user, including the server and other users. In DP, the adversary is the data analyst, who is assumed to control all other users; but the server is trusted. The goal of introducing multi-party models is to both avoid placing trust in any single entity, and achieve better utility than LDP. As a result, one has to consider multiple adversaries, each controlling a different combination of parties involved in the protocol. A user’s levels of privacy against different adversaries are different.

We assume all parties have the same level of (very powerful) background knowledge, i.e., all other users’ information except the victim’s. This assumption essentially enables us to argue DP-like guarantee for each party, which will be introduced later.

The prominent adversary is the server. Other parties can also be adversaries but are not the focus because they have less information. In particular, the analyst's knowledge is strictly less than the server's, because the analyst's queries are answered by the server. The auxiliary server has some additional knowledge (e.g., the linkage between users and reports in the shuffle-based approach), but such information is meaningless unless the server colludes with it.

Additional Threat of Collusion. Existing work on MDP only considers the privacy guarantee against the server, but we note that in the multi-party setting, one also needs to consider the consequences when different parties collude. In general, there are many combinations of colluding parties, and understanding the consequences of these scenarios enables us to better analyze and compare different approaches.

In particular, the server can collude with the auxiliary server. In this case, the system model is reduced to that of LDP. If the protocol does not consider the possibility of this collusion (e.g., the HE-based approach), the guarantee will then be worse than the LDP guarantee. On the other hand, if the server colludes with the users (except the victim), the privacy guarantee could downgrade to LDP as well. The server can also collude with both the auxiliary server and other users.

Other combinations are possible but less severe. Specifically, we ignore the analyst, as its information is strictly less than the server's. And there is no benefit if the auxiliary server colludes with the users.

To summarize, we consider all potential collusions and identify three important (sets of) adversaries: (1) the server itself, (2) the server colluding with other users, and (3) the server with the auxiliary server.

Figure 1: Interactions among parties. Users send the processed data to the auxiliary server. The auxiliary server assembles and processes the users' data, and forwards them to the server. The server can answer queries issued by the analyst.

4.2 Multi-party Differential Privacy

Given the three possible (sets of) adversaries, directly analyzing the privacy property of a method becomes challenging. Existing methods only prove the DP guarantee for the server. To quantitatively compare different methods, we propose a unified DP notion, called MDP (Multi-party DP), that models the different parties by specifying their views.

In particular, we assume the parties follow the protocol. Each party is associated with an algorithm, and the party observes the output of this algorithm. Although a party's view is interactive, i.e., there is timing information about observations and they may depend on what the party sends out, we argue that the distribution of the observations is independent of time and input. To argue for the privacy guarantee against each party, one proves a DP bound for the algorithm whose output is the party's observation.

Definition 3 (Multi-party Differential Privacy).

We say that a protocol satisfies $(\epsilon, \delta)$-DP against a set $P$ of parties, where $\epsilon, \delta \ge 0$, when the following is true. Let $\mathcal{A}_P$ denote the (randomized) function that takes the input dataset(s) and outputs the views of all parties in $P$. Then for any neighboring datasets $D$ and $D'$, and any set $R$ of possible outputs of $\mathcal{A}_P$, we have $\Pr[\mathcal{A}_P(D) \in R] \le e^{\epsilon}\,\Pr[\mathcal{A}_P(D') \in R] + \delta$.

Note that the cryptographic primitives are assumed to be safe (i.e., the adversaries are computationally bounded and cannot learn any information from the ciphertext).

This abstraction from (sets of) parties' views to algorithms relies on the protocol being data-independent, i.e., the behavior of each party does not depend on its input. If the parties can deviate from the prescribed procedure, one should model the deviated procedure and prove the guarantee under it. We discuss the possible deviations in Section 4.4.

Atomic Algorithms. To simplify the analysis of the potential adversaries in a protocol, we first introduce a set of atomic operations. Given that, the views of different potential adversaries can be modeled as combinations of these operations. Table 2 gives the list of atomic operations.

Algorithm | Procedure
$\mathcal{S}(\cdot)$ | Shuffle the input reports
$\mathcal{R}(\cdot)$ | Perturb a single record (LDP)
$\mathcal{G}(\cdot)$ | Aggregate the input into a histogram
$\mathcal{P}(\cdot)$ | Perturb the aggregated result (DP)
Table 2: List of atomic operations.

Among the possible local randomizers $\mathcal{R}$, two of them (GRR and OLH) are given in Section 2.2. For the DP algorithm $\mathcal{P}$, one needs to aggregate the results first (i.e., run the aggregation function $\mathcal{G}$), and then add independent noise to each count of the histogram (we consider the basic task of histogram estimation, or counting query, which serves as the building block for other tasks).

Note that $\mathcal{R}$ is described for a single value $v$ and is $\epsilon_l$-LDP. The server actually receives $\mathcal{R}(D) = \langle \mathcal{R}(v_1), \ldots, \mathcal{R}(v_n) \rangle$. We can argue that $\mathcal{R}(D)$ is DP from the server's point of view.

Theorem 2.

If $\mathcal{R}$ is $\epsilon_l$-LDP, then $\mathcal{R}(D)$ is $\epsilon_l$-DP.

The intuition is that, as each singleton $\mathcal{R}(v_i)$ is protected by LDP, the whole dataset is also protected by DP. The proof is deferred to Appendix A.

4.3 Analyzing Existing Methods

Now we identify the interactions and possible threats for the shuffle-based and HE-based approaches introduced in Section 3, and model them with different algorithms. Without loss of generality, we assume that the $n$-th user is the victim, and we denote by $D_{-n}$ the vector $D$ without the $n$-th item. We use the operator $\circ$ to denote the composition of algorithms, e.g., $\mathcal{S}\circ\mathcal{R}(D) = \mathcal{S}(\langle \mathcal{R}(v_1), \ldots, \mathcal{R}(v_n) \rangle)$. The analysis of the two approaches is given as follows:

Analyzing the shuffle-based approach. The server can be modeled by $\mathcal{S}\circ\mathcal{R}(D)$, as the shuffling operation is applied to the LDP reports. If the users collude with the server, the server's view can be modeled as $\langle \mathcal{S}\circ\mathcal{R}(D), \mathcal{R}(D_{-n}) \rangle$. Note that the two instances of $\mathcal{R}(D_{-n})$ are the same (the colluding users reveal their actual randomized reports). The adversary can subtract $\mathcal{R}(D_{-n})$ from the aggregated $\mathcal{S}\circ\mathcal{R}(D)$ to obtain $\mathcal{R}(v_n)$, which is equivalent to saying the view is $\mathcal{R}(v_n)$. Finally, if the auxiliary server colludes with the server, the model falls back to the LDP setting and the server's view can be modeled as $\mathcal{R}(D)$.

Analyzing the HE-based approach. In this method, the server has access to $\mathcal{P}\circ\mathcal{G}(D)$, which satisfies DP. If the users collude with the server, the view can be modeled as $\langle \mathcal{P}\circ\mathcal{G}(D), D_{-n} \rangle$. The server can then subtract $\mathcal{G}(D_{-n})$ from $\mathcal{P}\circ\mathcal{G}(D)$ and obtain $\mathcal{P}\circ\mathcal{G}(\langle v_n \rangle)$. As $\mathcal{P}\circ\mathcal{G}(D)$ satisfies DP, $\mathcal{P}\circ\mathcal{G}(\langle v_n \rangle)$ also satisfies DP. If the auxiliary server colludes with the server, the server's view can be modeled as $D$ itself, because the auxiliary server can send the ciphertexts, which are encrypted versions of the users' values, to the server; the server can decrypt them, and thus there is no privacy guarantee (we use $\infty$ to represent this). Table 3 summarizes the algorithms and guarantees of the three types of adversaries.

Adversary | Server | Server + Users | Server + Auxiliary
Shuffle-based view | $\mathcal{S}\circ\mathcal{R}(D)$ | $\mathcal{R}(v_n)$ | $\mathcal{R}(D)$
Shuffle-based guarantee | amplified $\epsilon_c$ | $\epsilon_l$ | $\epsilon_l$
HE-based view | $\mathcal{P}\circ\mathcal{G}(D)$ | $\mathcal{P}\circ\mathcal{G}(\langle v_n \rangle)$ | $D$
HE-based guarantee | $\epsilon$ | $\epsilon$ | $\infty$
Table 3: List of adversaries and their privacy guarantees in the shuffle-based and HE-based approaches. The values of $\delta$ are the same and thus ignored. We use $\infty$ when there is no privacy guarantee.

4.4 Robustness to Perturbation

So far, we have mostly assumed all the parties to be honest but curious. It is also interesting to examine the consequences when parties deviate from the protocol. A party could be malicious in order to (1) interrupt the process, (2) infer sensitive information (break privacy), or (3) downgrade the utility. We analyze the three concerns one by one.

The first concern is easy to mitigate. If a user blocks the protocol, his report can be ignored. If the auxiliary server denies the service, the server can find another auxiliary server and redo the protocol. Note that in this case, users need to remember their reports to avoid averaging attacks. Finally, the server has no incentive to disrupt a process it started itself.

Second, it is possible that the auxiliary server deviates from the protocol by not shuffling (in the shuffle-based method) or not adding noise (in the HE-based method), so that the server has access to the linkable LDP reports (in the shuffle-based method) or to an aggregate without the auxiliary server's noise (in the HE-based method). In these cases, the server can learn more information, but the auxiliary server gains nothing except saving some computational power. And if the auxiliary server colludes with the server, they can learn more information even without this deviation. Thus we assume the auxiliary server will not deviate to infer sensitive information, and do not consider this concern further. Note, however, that trusted hardware or verifiable computation techniques can help mitigate this concern.

Third, we note that any party can downgrade the utility. The most straightforward way is to generate many fake users and let them join the protocol. This kind of sybil attack is hard to defend against without some offline authentication, which is orthogonal to the focus of this paper. As there are many users, we want to limit each user's contribution, such that the most a user can do to downgrade the utility is to change his original value or register fake accounts. The LDP reports used in the shuffle-based method satisfy this property. For the HE-based method, as one-hot encoding is used, the user has to prove in zero knowledge that his report is well-formed. On the other hand, the power of the auxiliary server is much greater, as it collects all the data. For this paper, we assume the auxiliary server follows the protocol. Whether it is possible to verify this fact, and how, are interesting open questions.

To summarize, we assume the server and the auxiliary server follow the protocol, as there is no benefit deviating from the procedure. We are mainly concerned about the users adding too much noise disrupting the utility.

4.5 Discussion and Key Observations

In this section, we first systematically analyzed the system model of MDP. From the privacy perspective, we then proposed Definition 3 to quantitatively analyze different methods. Finally, we discussed the potential concern of malicious parties. Several observations and lessons are worth noting:

Consider Different Threats. Throughout our analysis, we have identified three potential adversaries, corresponding to the three columns of Table 3. We note that in the MDP model, one needs to consider all possible threats.

When Auxiliary Server Colludes: No Amplification. When the aggregator colludes with the auxiliary server, the MDP model falls back to the local model. In the shuffle-based approach, there is still an LDP guarantee, as each user adds local noise. In the HE-based approach, there is no privacy protection at all, as each user sends (an encryption of) their true value.

When Users Collude: Possibility Missed in the Shuffle-based Approach. When we prove DP/LDP guarantees, we assume the adversary has access to all users' true sensitive values except the victim's, i.e., $D_{-n}$. We note that in MDP, such an adversary may also have access to $\mathcal{R}(D_{-n})$, the other users' randomized reports. Such cases include the users (except the victim) colluding with the server, or the server controlling the users (except the victim). Of course, there could also be cases where the adversary knows $D_{-n}$ but not $\mathcal{R}(D_{-n})$, e.g., the users report true values on another website, which colludes with the server. But we note that in this case, the victim's true value may also be leaked.

Thus, it is possible but less meaningful to assume the adversary knows $D_{-n}$ but not $\mathcal{R}(D_{-n})$, as done in the shuffle-based approach, which makes the shuffle-based amplification less intuitive in real-world scenarios. On the other hand, in the HE-based approach, there is still a strong privacy guarantee when users collude.

5 MURS: Multi Uniform Random Shufflers

Based on the observations described in Section 4.5, we advocate adding noise from both the user side and the server side. In this section, we describe our proposed protocol MURS, which stands for Multi Uniform Random Shufflers. MURS adopts two existing techniques, i.e., (1) the shuffler model and (2) the introduction of multiple auxiliary servers. Based on that, MURS has two novelties: (1) we theoretically improve the privacy amplification technique of the shuffler model; and (2) we propose to have both the auxiliary servers and the users add noise, and develop corresponding privacy guarantees.

The MURS protocol improves over existing techniques in both utility and privacy. Moreover, MURS is flexible and allows configuring different levels of resistance to the three adversaries in MDP.

In what follows, we first present the reasons behind the design choices. Then we provide the details of our techniques.

5.1 Design Choices

Choosing the Shuffler Model. We choose the shuffler model instead of the HE-based model mainly because it is more computation- and communication-efficient, and it scales well with the domain size $d$.

In the HE-based approach, as one-hot encoding is used, the communication and computation overheads are large. In particular, for $n$ users with a domain of size $d$, the communication bandwidth is $n \cdot d$ times the size of an HE ciphertext. In addition, each estimation requires expensive HE operations.

Multiple Auxiliary Servers. As there is no privacy advantage once the two servers collude, we borrow the idea of involving more auxiliary servers, which is a standard approach in other settings (e.g., voting, private messaging), to mitigate this threat. As long as the server does not collude with all of the auxiliary servers, the privacy amplification effect remains, although this introduces more communication cost.

5.2 Extend MURS with OLH

As pointed out in Section 2.2, GRR has poor utility when the domain size is large. Moreover, if the central privacy budget is small, there is no privacy amplification. To overcome these two shortcomings, we propose to use OLH in MURS.

Each user runs the perturbation function of OLH locally. That is, each user first selects a random hash function $H$, and then hashes the value $v$ into a smaller domain $[d']$. Different from OLH, in this setting $d'$ is configured differently (rather than using the variance-optimal choice from [37]). For simplicity, we assume $d'$ is an integer. The user then perturbs $H(v)$ into $y$, and reports $H$ and $y$ together.

Specifically, given $s$ auxiliary servers, the user obtains their public keys $pk_1, \ldots, pk_s$ and the server's public key $pk_0$. The user then encrypts the report with reverse layered public-key encryption, i.e., $E_{pk_1}(E_{pk_2}(\cdots E_{pk_s}(E_{pk_0}(\langle H, y \rangle))))$, and sends it to the first auxiliary server. Each auxiliary server decrypts one layer of the reports using its corresponding secret key, shuffles the result, and sends them to the next auxiliary server. Finally, the server decrypts the inner-most layer to obtain the reports and evaluates them as described in Equation (3).

Now we analyze the privacy guarantee of using OLH by examining the three adversaries (1) to (3) as listed in Table 3. Given that each user applies an $\epsilon_l$-DP local randomizer, the view of adversary (3) is $\epsilon_l$-DP. This also holds for adversary (2), as the server can obtain the victim's report by colluding with the users. But for adversary (1), the privacy guarantee is different from the GRR-based analysis. In particular, we have:

Theorem 3.

When using an $\epsilon_l$-DP OLH randomizer in the shuffler model, the server's view (adversary (1)) is $(\epsilon_c, \delta)$-DP, where

(4)
Proof.

We utilize Theorem 1 in the proof. The key is to derive the privacy blanket, which follows a binomial distribution.

Given the report $\langle H_i, y_i \rangle$ of each user $i$, we can reconstruct a bit-vector $B_i$ of size $d$, where $B_i[v] = 1$ iff $H_i(v) = y_i$. As $H_i$ is a random hash function, the bits can be regarded as independent, with $\Pr[B_i[v] = 1] = p$ if $v = v_i$, and $\Pr[B_i[v] = 1] = 1/d'$ if $v \ne v_i$. Now for any two neighboring datasets $D \simeq D'$, w.l.o.g., we assume they differ in the $n$-th value, which is $a$ in $D$ and $b$ in $D'$. By the independence of the bits, the other locations are equivalent and can be canceled out. Thus we only need to examine the summation of bits for locations $a$ and $b$. For each location, each of the $n-1$ other users reports the bit $1$ with a probability that contains an input-independent (uniform) component. After shuffling, this blanket part of the histogram of the $n-1$ (except the victim's) random variables follows a binomial distribution. As there are two locations, by Theorem 1, we obtain the bound of Equation (4).
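The two support probabilities used above can be checked with a quick simulation (our own illustration; an explicit random mapping stands in for the ideal hash family):

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_prime, epsilon_l, trials = 20, 4, 1.0, 200_000
p = np.exp(epsilon_l) / (np.exp(epsilon_l) + d_prime - 1)
true_v, other_v = 3, 7        # victim's value and an arbitrary other value

hits_true = hits_other = 0
for _ in range(trials):
    H = rng.integers(0, d_prime, size=d)     # idealized random hash function
    if rng.random() < p:                     # GRR on the hashed value
        y = H[true_v]
    else:
        o = rng.integers(0, d_prime - 1)
        y = o if o < H[true_v] else o + 1
    hits_true += (H[true_v] == y)
    hits_other += (H[other_v] == y)

print(hits_true / trials, p)                 # empirical ~ p
print(hits_other / trials, 1 / d_prime)      # empirical ~ 1/d'
```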

Input: reports received from the users (or from the previous auxiliary server); this server's secret key; parameters $d$, $d'$, $\epsilon_l$, $n_s$
Init the report list $R \leftarrow \langle\rangle$
for each received ciphertext $c$ do
     Receive $c$, decrypt one layer with the secret key, and append the result to $R$
for $j = 1, \ldots, n_s$ do      ▷ Add noise
     Pick a hash function $H$ from the default hash family used in OLH      ▷ Default hash family in OLH
     Draw $v$ uniformly at random from $[d]$ and let $x \leftarrow H(v)$      ▷ Perturbation
     if the value is kept (with probability $p$) then
          $y \leftarrow x$
     else
          $y \leftarrow$ a value drawn uniformly from $[d'] \setminus \{x\}$
     Encrypt $\langle H, y \rangle$ under the remaining layers and append it to $R$
Shuffle $R$
if this is the last auxiliary server then
     Send $R$ to the server
else
     Send $R$ to the next auxiliary server
Algorithm 1 Auxiliary Server
Input: reports $R$ from the last auxiliary server; parameters $d$, $d'$, $\epsilon_l$, $s$, $n_s$
Init support counts $C_v \leftarrow 0$ for all $v \in [d]$
Receive $R$ from the last auxiliary server and decrypt the final layer
for each report $\langle H_i, y_i \rangle$ in $R$ do
     for each $v \in [d]$ with $H_i(v) = y_i$, set $C_v \leftarrow C_v + 1$
for each $v \in [d]$ do      ▷ New aggregation function
     Compute the estimate $\tilde f_v$ from $C_v$ as in Equation (3), using the total population $n + s \cdot n_s$, and subtract $s \cdot n_s / d$
Algorithm 2 Server

5.3 Fake Response from Auxiliary Server

We propose to equip the auxiliary servers with an additional function of adding some random reports. These random reports serve the purpose of DP noise. If the server colludes with some auxiliary servers, it recovers only part of the added random reports. Specifically, each auxiliary server adds $n_s$ values drawn from the input domain uniformly at random. Given $s$ auxiliary servers, there are $s \cdot n_s$ uniform reports in total. The added values are then perturbed by the same algorithm used by the users, and shuffled with the received reports. The server, after aggregation, subtracts $s \cdot n_s / d$ from the estimation of each value, where $d$ is the size of the domain. Algorithms 1 and 2 give the procedure for the auxiliary server and the server in MURS, respectively. The algorithms by default use OLH, assuming $d$ is large. When $d$ is smaller, MURS switches to GRR to obtain better utility. We will give more guidelines on when to switch later.
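To illustrate the end-to-end flow of Algorithms 1 and 2, here is a simplified simulation in Python (encryption omitted, GRR used for brevity; the parameter names $s$ and $n_s$ follow the description above and all function names are our own):

```python
import numpy as np

rng = np.random.default_rng(42)
d, epsilon_l, n, s, n_s = 10, 1.0, 30_000, 2, 1_000
p = np.exp(epsilon_l) / (np.exp(epsilon_l) + d - 1)
q = 1.0 / (np.exp(epsilon_l) + d - 1)

def grr(vals):
    # Keep each value with probability p, otherwise replace it by another value.
    keep = rng.random(len(vals)) < p
    others = rng.integers(0, d - 1, size=len(vals))
    others = np.where(others < vals, others, others + 1)
    return np.where(keep, vals, others)

# Users: perturb locally (the layered encryption is omitted in this sketch).
true_values = rng.integers(0, d, size=n)
reports = grr(true_values)

# Each auxiliary server: append n_s perturbed uniform fake reports, then shuffle.
for _ in range(s):
    fakes = grr(rng.integers(0, d, size=n_s))
    reports = rng.permutation(np.concatenate([reports, fakes]))

# Server: aggregate, debias as in Equation (2), and subtract the expected
# contribution s * n_s / d of the fake reports.
total = n + s * n_s
counts = np.bincount(reports, minlength=d).astype(float)
estimates = (counts - total * q) / (p - q) - s * n_s / d
print(np.round(estimates))
print(np.bincount(true_values, minlength=d))   # compare with the true counts
```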

To argue about the privacy guarantee of this method, we need to identify the views of adversaries (1) through (3). First, the server's view is the shuffled reports, where the reports come from the $n$ users and the $s \cdot n_s$ randomly drawn inputs $U$. The view can be modeled as $\mathcal{S}\circ\mathcal{R}(D \circ U)$, where $\circ$ here also concatenates two tuples. If the users collude, the server can subtract $\mathcal{R}(D_{-n})$ and obtain $\mathcal{S}\circ\mathcal{R}(\langle v_n \rangle \circ U)$. If all the auxiliary servers collude with the server, the permutation and the added reports are revealed, and one has $\mathcal{R}(D)$. The following theorem gives the precise privacy guarantee for each of the adversaries:

Theorem 4.

In MURS, given that the local randomizer is $\epsilon_l$-DP, the view of adversary (1) is $(\epsilon_1, \delta)$-DP and the view of adversary (2) is $(\epsilon_2, \delta)$-DP, where

(5)
(6)

Note that one can also use GRR in MURS, and we have a similar theorem:

Theorem 5.

In MURS with GRR, given that the local randomizer is $\epsilon_l$-DP, the view of adversary (1) is $(\epsilon_1, \delta)$-DP and the view of adversary (2) is $(\epsilon_2, \delta)$-DP, with bounds analogous to Equations (5) and (6).

The proofs are deferred to Appendix A. The intuition is that the views of adversaries (1) and (2) both contain binomial noise added to the histogram, and one can follow the argument of Theorem 3 to show that they satisfy DP.

5.4 Utility Analysis

We analyze the utility of MURS from the perspective of the server. In particular, we measure the expected squared error $\mathbb{E}[(\tilde f_v - f_v)^2]$, where $f_v$ is the true frequency of value $v$, and $\tilde f_v$ from Algorithm 2 is the server's estimation of $f_v$. We first show $\tilde f_v$ is unbiased.

Lemma 6.

The server’s estimation from Algorithm 2 is an unbiased estimation of , i.e.,

The proof is deferred to Appendix A. Given that, the expected squared error equals variance, i.e.,

Now we derive the expected squared error of MURS. For OLH alone, the estimator $\tilde f_v$ was given in Equation (3). Its variance has been derived in [37], but note that our setting is slightly different, as we have a total population of $n + s \cdot n_s$ reports, and we need to subtract $s \cdot n_s / d$ from the result.

Theorem 7.

If the local randomizer is $\epsilon_l$-DP, and we fix $\delta$ and the target central guarantee, the expected squared error is bounded by

where $\epsilon_l$ satisfies Equation (6).

Proof.

With the total population of $n + s \cdot n_s$ reports and the subtraction of $s \cdot n_s / d$, we adapt the proof of [37]:

Here, for each of the $n$ users, if his true value is $v$, the report supports $v$ with probability $p$, and there are $n f_v$ of them; otherwise, the report supports $v$ with probability $1/d'$, for the rest of the users. For the uniform responses, as their values are randomly sampled, each supports $v$ with a fixed probability that does not depend on $D$. Together, we have

(7)

Finally, as the noise from the uniform responses is independent of the users' reports, the variances add up, which gives the stated bound.
If GRR is used in MURS, we have similar analyses, presented below. The proofs are deferred to Appendix A.

Lemma 8.

The server’s estimation with is an unbiased estimation of , i.e.,

Theorem 9.

If the local randomizer is $\epsilon_l$-DP, and we fix $\delta$ and the target central guarantee, then using GRR, the expected squared error is bounded by

where $\epsilon_l$ satisfies Equation (6).

5.5 Discussion

The proposed MURS strengthens the shuffler model from three perspectives: First, it improves utility by applying a specially configured OLH primitive. Second, it provides a better privacy guarantee when users collude with the server, which is an assumption made in DP. Third, it makes the threat of the server colluding with the auxiliary server harder to realize. Given the main techniques, there are issues and extensions we want to discuss.

Choosing GRR or OLH. In [37], there is a clear guideline for choosing GRR or OLH, based on the domain size $d$. Here, as given in Theorems 7 and 9, the choice depends on more parameters, and thus is more complicated. We can numerically compare the utility of GRR and OLH.

Choosing Parameters. Given the desired privacy level