1 Introduction
Software and service providers increasingly see the collection and analysis of user data as key to improving their services. Datasets of user interactions give insight to analysts and provide training data for machine learning models. But the collection of these datasets comes with risk—can the service provider keep the data secure from unauthorized access? Misuse of data can violate the privacy of users and substantially tarnish the provider’s reputation.
One way to minimize risk is to store less data: providers can methodically consider what data to collect and how long to store it. However, even a carefully processed dataset can compromise user privacy. In a now-famous study, Narayanan and Shmatikov (2008) showed how to deanonymize watch histories released in the Netflix Prize, a public recommender system competition. While most providers do not intentionally release anonymized datasets, security breaches can mean that even internal, anonymized datasets have the potential to become privacy problems.
Fortunately, mathematical formulations exist that can give the benefits of population-level statistics without the collection of raw data. Local differential privacy (Duchi et al., 2013a; b)
is one such formulation, requiring each device (or session for a cloud service) to share only a noised version of its raw data with the service provider’s logging mechanism. No matter what computation is done to the noised output of a locally differentially private mechanism, any attempt to impute properties of a single record will have a significant probability of error. But not all differentially private mechanisms are equal when it comes to utility: some mechanisms have better accuracy than others for a given analysis, amount of data, and desired privacy level.
Private distribution estimation.
This paper investigates the fundamental problem of discrete distribution estimation under local differential privacy. We focus on discrete distribution estimation because it enables a variety of useful capabilities, including usage statistics breakdowns and count-based machine learning models, e.g. naive Bayes (McCallum et al., 1998). We consider empirical, maximum likelihood, and minimax distribution estimation, and study the price of local differential privacy under a variety of loss functions and privacy regimes. In particular, we compare the performance of two recent local privacy mechanisms from a theoretical and empirical perspective: (a) the Randomized Aggregatable Privacy-Preserving Ordinal Response (Rappor) (Erlingsson et al., 2014), and (b) the k-ary Randomized Response (RR) (Kairouz et al., 2014). Our contributions are:


For k-ary alphabets, we show that Rappor is order optimal in the high privacy regime and strictly suboptimal in the low privacy regime under ℓ1 and ℓ2 losses using an empirical estimator. Conversely, RR is order optimal in the low privacy regime and strictly suboptimal in the high privacy regime (Section 4.1).

Large-scale simulations show that the optimal decoding algorithm for both RR and Rappor depends on the shape of the true underlying distribution. For skewed distributions, the projected estimator (introduced here) offers the best utility across a wide variety of privacy levels and sample sizes (Section 4.4).
For open alphabets, in which the set of input symbols is not enumerable a priori, we construct the ORR mechanism (an extension to RR using hash functions and cohorts) and provide empirical evidence that the performance of ORR meets or exceeds that of Rappor over a wide range of privacy settings (Section 5).

We apply the ORR mechanism to closed k-ary alphabets, replacing hash functions with permutations. We provide empirical evidence that the performance of ORR meets or exceeds that of RR and Rappor in both low and high privacy regimes (Section 5.4).
Related work. There is a rich literature on distribution estimation under local privacy (Chan et al., 2012; Hsu et al., 2012; Bassily & Smith, 2015), of which several works are particularly relevant herein. Warner (1965) was the first to study the local privacy setting and propose the randomized response model that will be detailed in Section 3. Kairouz et al. (2014) introduced RR and showed that it is optimal in the low privacy regime for a rich class of information theoretic utility functions. RR will be extended to open alphabets in Section 5.1. Duchi et al. (2013a; b) were the first to apply differential privacy to the local setting, to study the fundamental tradeoff between privacy and minimax distribution estimation in the high privacy regime, and to introduce the core of Rappor. Erlingsson et al. (2014) proposed Rappor, systematically addressing a variety of practical issues for private distribution estimation, including robustness to attackers with access to multiple reports over time, and estimating distributions over open alphabets. Rappor has been deployed in the Chrome browser to allow Google to privately monitor the impact of malware on homepage settings. Rappor will be investigated in Sections 4.2 and 5.2.
Private distribution estimation also appears in the global privacy context where a trusted service provider releases randomized data (e.g., NIH releasing medical records) to protect sensitive user information (Dwork, 2006; Dwork et al., 2006; Dwork & Lei, 2009; Dwork, 2008; Diakonikolas et al., 2015; Blocki et al., 2016).
2 Preliminaries
2.1 Local differential privacy
Let X be a private source of information defined on a discrete, finite input alphabet 𝒳. A statistical privatization mechanism Q is a family of distributions that maps x ∈ 𝒳 to y ∈ 𝒴 with probability Q(y|x). Y, the privatized version of X, is defined on an output alphabet 𝒴 that need not be identical to the input alphabet 𝒳. In this paper, we will represent a privatization mechanism via a row-stochastic matrix. A conditional distribution Q is said to be ε-locally differentially private if for all x, x' ∈ 𝒳 and all y ∈ 𝒴, we have that

Q(y|x) ≤ e^ε Q(y|x'),   (1)

where ε ≥ 0 (Duchi et al., 2013a). In other words, by observing Y, the adversary cannot reliably infer whether X = x or X = x' (for any pair x and x'). Indeed, the smaller ε is, the closer the likelihood ratio of Q(y|x) to Q(y|x') is to 1. Therefore, when ε is small, the adversary cannot recover the true value of X reliably.
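As a concrete sketch of the definition, the following check (our own illustrative code, not from the paper) verifies whether a row-stochastic matrix satisfies ε-local differential privacy by bounding the per-output likelihood ratios:

```python
import numpy as np

def is_locally_private(Q, eps):
    """Check eps-local differential privacy of a row-stochastic matrix Q
    (rows indexed by inputs x, columns by outputs y): for every output y,
    max_x Q[x, y] <= e^eps * min_x Q[x, y]."""
    Q = np.asarray(Q, dtype=float)
    for y in range(Q.shape[1]):
        col = Q[:, y]
        if col.max() > np.exp(eps) * col.min() + 1e-12:
            return False
    return True

# Flipping the true bit with probability 1/4 keeps every likelihood
# ratio at 3 = e^{ln 3}, so this mechanism is (ln 3)-locally private.
Q = np.array([[0.75, 0.25],
              [0.25, 0.75]])
print(is_locally_private(Q, np.log(3)))   # True
print(is_locally_private(Q, np.log(2)))   # False: the ratio 3 exceeds e^eps = 2
```

Smaller ε forces the rows of Q closer together, which is exactly why the adversary's inference problem becomes harder.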
2.2 Private distribution estimation
The private multinomial estimation problem is defined as follows. Given a vector p = (p_1, …, p_k) on the probability simplex Δ_k, n samples X_1, …, X_n are drawn i.i.d. according to p. An ε-locally differentially private mechanism Q is then applied independently to each sample to produce Y^n = (Y_1, …, Y_n), the sequence of private observations. Observe that the Y_i's are distributed according to m = pQ and not p. Our goal is to estimate the distribution vector p from Y^n.

Privacy vs. utility. There is a fundamental tradeoff between utility and privacy: the more privacy we demand, the less utility we can obtain. To formally analyze the privacy-utility tradeoff, we study the following constrained minimization problem
r_ℓ(n, ε) = min_{Q ∈ 𝒬_ε} min_{p̂} max_{p ∈ Δ_k} E[ ℓ(p̂(Y^n), p) ],   (2)

where the inner min-max is the minimax risk under a fixed mechanism Q, ℓ is an application-dependent loss function, and 𝒬_ε is the set of all ε-locally differentially private mechanisms.
This problem, though of great value, is intractable in general. Indeed, finding minimax estimators in the non-private setting is already hard for several loss functions. For instance, the minimax estimator under ℓ1 loss remains unknown to this day. However, in the high privacy regime, we are able to bound the minimax risk of any ε-locally differentially private mechanism Q.
Proposition 1
For the private distribution estimation problem in (2), for any ε-locally differentially private mechanism Q, there exist universal constants c_1, c_2 > 0 such that for all ε ∈ [0, 1],

r_{ℓ1, Q}(n, ε) ≥ c_1 min{ 1, k/√(n ε²) },

and

r_{ℓ2², Q}(n, ε) ≥ c_2 min{ 1, k/(n ε²) }.
Proof
See (Duchi et al., 2013b).
This result shows that in the high privacy regime (ε ≤ 1), the effective sample size of a dataset decreases from n to nε²/k. In other words, a factor of k/ε² extra samples are needed to achieve the same minimax risk. This is problematic for large alphabets. Our work shows that (a) this problem can be (partially) circumvented using a combination of cohort-style hashing and RR (Section 5), and (b) the dependence on the alphabet size vanishes in the moderate to low privacy regime (Section 4.3).
3 Binary Alphabets
In this section, we study the problem of private distribution estimation under binary alphabets. In particular, we show that Warner's randomized response model (WRR) is optimal for binary minimax distribution estimation (Warner, 1965). In WRR, interviewees flip a biased coin (whose outcome only they can see), such that a fraction e^ε/(1 + e^ε) of participants answer the question "Is the predicate A true (of you)?" while the remaining participants answer the negation ("Is ¬A true?"), without revealing which question they answered. For the binary alphabet 𝒳 = {0, 1}, WRR can be described by the following row-stochastic matrix

Q_WRR = (1/(1 + e^ε)) [ e^ε  1 ; 1  e^ε ].   (3)
It is easy to check that the above mechanism satisfies the constraints imposed by local differential privacy.
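As a sanity check on the mechanism, the following sketch (the privacy level and true proportion are illustrative assumptions, not values from the paper) simulates WRR reports and debiases the observed "yes" rate:

```python
import numpy as np

rng = np.random.default_rng(0)
eps = np.log(3)                       # answer truthfully with probability 3/4
pi = np.exp(eps) / (1 + np.exp(eps))  # probability of a truthful response
p_true = 0.3                          # assumed fraction holding the sensitive trait
n = 200_000

x = rng.random(n) < p_true            # private bits (never observed directly)
truthful = rng.random(n) < pi
y = np.where(truthful, x, ~x)         # Warner's randomized response reports

m_hat = y.mean()                      # observed "yes" rate among noised reports
# Invert E[m_hat] = (p (e^eps - 1) + 1) / (1 + e^eps) to debias:
p_hat = (m_hat * (1 + np.exp(eps)) - 1) / (np.exp(eps) - 1)
print(round(float(p_hat), 3))         # close to 0.3
```

Each individual report is deniable, yet the aggregate estimate concentrates around the true proportion as n grows.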
Theorem 2
For all binary distributions p, all loss functions ℓ, and all privacy levels ε ≥ 0, Q_WRR is the optimal solution to the private minimax distribution estimation problem in (2).
Proof sketch. Kairouz et al. (2014) showed that WRR dominates all other ε-differentially private mechanisms in a strong Markovian sense: for any binary ε-differentially private mechanism Q, there exists a stochastic mapping W such that Q = Q_WRR W. Therefore, for any risk function that obeys the data processing inequality (r_{QW} ≥ r_Q for any mechanism Q and stochastic mapping W), we have that r_Q ≥ r_{Q_WRR} for any binary ε-differentially private mechanism Q. In Supplementary Section A, we prove that the minimax risk obeys the data processing inequality; thus WRR achieves the optimal privacy-utility tradeoff under minimax distribution estimation.
4 k-ary Alphabets
Above, we saw that WRR is optimal for all privacy levels and all loss functions. However, it can only be applied to binary alphabets. In this section, we study optimal privacy mechanisms for k-ary alphabets. We show that under ℓ1 and ℓ2 losses, Rappor is order optimal in the high privacy regime and suboptimal in the low privacy regime. Conversely, RR is order optimal in the low privacy regime and suboptimal in the high privacy regime.
4.1 The k-ary Randomized Response
The k-ary randomized response (RR) mechanism is an ε-locally differentially private mechanism that maps 𝒳 stochastically onto itself (i.e., 𝒴 = 𝒳), given by

Q(y|x) = e^ε/(k − 1 + e^ε) if y = x, and 1/(k − 1 + e^ε) if y ≠ x.   (4)
RR can be viewed as a multiple choice generalization of the WRR mechanism (note that RR reduces to WRR for k = 2). In (Kairouz et al., 2014), the RR mechanism was shown to be optimal in the low privacy regime for a large class of information theoretic utility functions.
Empirical estimation under RR. It is easy to see that under Q_RR, outputs are distributed according to

m_i = (p_i (e^ε − 1) + 1) / (k − 1 + e^ε).   (5)
The empirical estimate of p under RR is given by

p̂_i = (m̂_i (k − 1 + e^ε) − 1) / (e^ε − 1),   (6)

where m̂ is the empirical estimate of m; the inverse of the RR channel matrix used here follows via the Sherman-Morrison formula. Observe that because m̂ converges to m almost surely, p̂ converges to p almost surely.
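A runnable sketch of k-RR encoding and its closed-form empirical decode (the alphabet size, ε, and true distribution below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
k, eps, n = 5, 1.0, 500_000
p = np.array([0.40, 0.25, 0.20, 0.10, 0.05])   # hypothetical true distribution
e = np.exp(eps)

# k-RR: keep the symbol w.p. e^eps/(k-1+e^eps), else report a uniform other symbol.
x = rng.choice(k, size=n, p=p)
keep = rng.random(n) < e / (k - 1 + e)
other = (x + rng.integers(1, k, size=n)) % k   # uniform over the k-1 other symbols
y = np.where(keep, x, other)

m_hat = np.bincount(y, minlength=k) / n        # empirical output distribution
# Closed-form inverse of the k-RR channel (the Sherman-Morrison inverse
# of the channel matrix reduces to this per-coordinate formula):
p_hat = (m_hat * (k - 1 + e) - 1) / (e - 1)
print(np.round(p_hat, 2))
```

Note that the decoded vector always sums to 1, but individual entries can dip below zero, which motivates the constraining step discussed next.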
Proposition 3
For the private distribution estimation problem under RR and its empirical estimator given in Section 4.1, for all k, ε, and p ∈ Δ_k, we have that

sup_{p ∈ Δ_k} E[ ‖p̂ − p‖₂² ] ≤ ((k − 1 + e^ε)/(e^ε − 1))² · (1/n),

and for large n,

sup_{p ∈ Δ_k} E[ ‖p̂ − p‖₁ ] ≲ ((k − 1 + e^ε)/(e^ε − 1)) · √(k/n),

where a_n ≲ b_n means lim sup_{n→∞} a_n/b_n ≤ 1.
Constraining empirical estimates to Δ_k. It is easy to see that Σ_i p̂_i = 1. However, some of the entries of p̂ can be negative (especially for small values of ε or n). Several remedies are available, including (a) truncating the negative entries to zero and renormalizing the entire vector to sum to 1, or (b) projecting p̂ onto the probability simplex. We evaluate both approaches in Section 4.4.
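Remedy (b) can be sketched with the simplex-projection routine the paper cites (Algorithm 1 of Wang & Carreira-Perpiñán, 2013); the example input vector is our own illustration:

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex
    (Algorithm 1 of Wang & Carreira-Perpinan, 2013)."""
    u = np.sort(v)[::-1]                       # sort in decreasing order
    css = np.cumsum(u)
    j = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1 - css) / j > 0)[0][-1]   # largest feasible index
    lam = (1 - css[rho]) / (rho + 1)
    return np.maximum(v + lam, 0)

# A raw empirical estimate with negative entries (sums to 1 already):
p_hat = np.array([0.62, 0.31, 0.18, -0.05, -0.06])
print(project_to_simplex(p_hat))   # nonnegative, sums to 1
```

Unlike truncate-and-renormalize, this returns the closest point of the simplex in Euclidean distance, which is what the projected decoder of Section 4.4 uses.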
4.2 Rappor
The randomized aggregatable privacy-preserving ordinal response (Rappor) is an open-source Google technology for collecting aggregate statistics from end-users with strong local differential privacy guarantees (Erlingsson et al., 2014). The simplest version of Rappor, called the basic one-time Rappor and referred to herein as Rappor, first appeared in (Duchi et al., 2013a; b). Rappor maps the input alphabet 𝒳 of size k to an output alphabet of size 2^k. In Rappor, we first map X deterministically into ℝ^k, the k-dimensional Euclidean space. Precisely, x is mapped to e_x, the x-th standard basis vector in ℝ^k. We then randomize the k coordinates of e_x independently to obtain the private vector Y. Formally, the j-th coordinate of Y is given by: Y_j = (e_x)_j with probability e^{ε/2}/(1 + e^{ε/2}) and Y_j = 1 − (e_x)_j with probability 1/(1 + e^{ε/2}). The randomization in Y is ε-locally differentially private (Duchi et al., 2013a; Erlingsson et al., 2014).
Under Rappor, Y is a k-dimensional binary vector, which implies that

Pr[Y_j = 1 | X = i] = e^{ε/2}/(1 + e^{ε/2}) if j = i, and 1/(1 + e^{ε/2}) if j ≠ i,   (10)

for all i, j ∈ {1, …, k}.
Empirical estimation under Rappor. Let M be the k × k matrix formed by stacking the row vectors Pr[Y = 1 | X = i] on top of each other. The empirical estimator of p under Rappor is:

p̂ = ȳ M⁻¹,   (11)

where ȳ = (1/n) Σ_i Y_i is the coordinate-wise average of the reports. Because ȳ converges to pM almost surely, p̂ converges to p almost surely. As with RR, we can constrain p̂ to Δ_k through truncation and normalization or through projection (described in Section 4.1), both of which will be evaluated in Section 4.4.
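A runnable sketch of basic one-time Rappor, assuming the standard per-bit flip probability 1/(1 + e^{ε/2}); the parameters and distribution are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
k, eps, n = 5, 2.0, 500_000
p = np.array([0.40, 0.25, 0.20, 0.10, 0.05])   # hypothetical true distribution

f = 1 / (1 + np.exp(eps / 2))    # each bit flips independently with probability f
x = rng.choice(k, size=n, p=p)
bits = np.eye(k, dtype=bool)[x]  # one-hot encoding of each sample
flips = rng.random((n, k)) < f
y = bits ^ flips                 # basic one-time Rappor reports

y_bar = y.mean(axis=0)           # per-coordinate mean of the reports
# Invert E[Y_j] = p_j (1 - f) + (1 - p_j) f = f + p_j (1 - 2f) per coordinate:
p_hat = (y_bar - f) / (1 - 2 * f)
print(np.round(p_hat, 2))
```

Because the channel matrix here is a scaled identity plus a rank-one term, inverting it reduces to this per-coordinate formula; the result need not sum exactly to 1, so the same simplex constraints apply.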
Proposition 4
For the private distribution estimation problem under Rappor and its empirical estimator given in (11), for all k, ε, and p ∈ Δ_k, we have that

sup_{p ∈ Δ_k} E[ ‖p̂ − p‖₂² ] ≤ (k/(4n)) · ((e^{ε/2} + 1)/(e^{ε/2} − 1))²,

and for large n,

sup_{p ∈ Δ_k} E[ ‖p̂ − p‖₁ ] ≲ (k/(2√n)) · ((e^{ε/2} + 1)/(e^{ε/2} − 1)),

where a_n ≲ b_n means lim sup_{n→∞} a_n/b_n ≤ 1.
4.3 Theoretical Analysis
We now analyze the performance of RR and Rappor relative to maximum likelihood estimation (which is equivalent to empirical estimation) on the non-privatized data X^n. In the non-private setting, the maximum likelihood estimator has a worst-case risk of order 1/n under the ℓ2 loss, and a worst-case risk of order √(k/n) under the ℓ1 loss (Lehmann & Casella, 1998; Kamath et al., 2015).
Performance under RR. Comparing Proposition 3 to the observation above, we can see that an extra factor of ((k − 1 + e^ε)/(e^ε − 1))² samples is needed to achieve the same ℓ2 loss as in the non-private setting. Similarly, from Proposition 3, a factor of ((k − 1 + e^ε)/(e^ε − 1))² samples is needed under the ℓ1 loss. For small ε, the sample size is effectively reduced to nε²/k² (under both losses). When compared to Proposition 1, this result implies that RR is not optimal in the high privacy regime. However, for e^ε on the order of k, the sample size is reduced to n up to constant factors (under both losses). This result suggests that, while RR is not optimal for small values of ε, it is "order" optimal for ε on the order of ln k. Note that RR provides a natural interpretation of this low privacy regime: specifically, setting ε = ln(k − 1) translates to telling the truth with probability 1/2 and lying uniformly over the remainder of the alphabet with probability 1/2; an intuitively reasonable notion of plausible deniability.
Performance under Rappor. Comparing Proposition 4 to the observation at the beginning of this subsection, we can see that an extra factor of (k/4) ((e^{ε/2} + 1)/(e^{ε/2} − 1))² samples is needed to achieve the same ℓ2 loss as in the non-private case. Similarly, from Proposition 4, an extra factor of the same order is needed under the ℓ1 loss. For small ε, n is effectively reduced to nε²/k up to constants (under both losses). When compared to Proposition 1, this result implies that Rappor is "order" optimal in the high privacy regime. However, for large ε, n is reduced to n/k up to constants (under both losses). This suggests that Rappor is strictly suboptimal in the moderate to low privacy regime.
Proposition 5
For all k ≥ 2 and all ε > 0,

sup_{p ∈ Δ_k} E[ ‖p̂_RR − p‖₁ ] ≤ sup_{p ∈ Δ_k} E[ ‖p̃_Rappor − p‖₁ ] ≤ sup_{p ∈ Δ_k} E[ ‖p̂_Rappor − p‖₁ ],   (14)

where p̂_RR is the empirical estimate of p under RR, p̂_Rappor is the empirical estimate of p under Rappor, and p̃_Rappor is the maximum likelihood estimator under Rappor.
Proof
See Supplementary Section D.
4.4 Simulation Analysis
To complement the theoretical analysis above, we ran simulations of RR and Rappor varying the alphabet size k, the privacy level ε, the number of users n, and the true distribution from which the samples were drawn. In all cases, we report the mean over 50 evaluations of ‖p̄ − p̂‖₁, where p̄ is the empirical distribution of a ground truth sample drawn from the true distribution and p̂ is the decoded RR or Rappor distribution. We vary ε over a range that corresponds to the moderate-to-low privacy regimes in our theoretical analysis above, observing that even large values of ε can provide plausible deniability impossible under unnoised logging.
We compare using the ℓ1 distance between the two distributions because in most applications we want to estimate all values well, emphasizing neither very large values (as an ℓ2 or higher metric might) nor very small values (as information theoretic metrics might). Supplementary Figures 5 and 6, analogous to the ones in this section, demonstrate that the choice of distance metric does not qualitatively affect our conclusions on the decoding strategies for RR or Rappor, nor on the regimes in which each is superior.
The distributions we considered in simulation were binomial distributions over a range of success probabilities, Zipf distributions over a range of exponents, multinomial distributions drawn from a symmetric Dirichlet distribution, and a geometric distribution (shown in Supplementary Figure 4). We focus primarily on the geometric distribution here because qualitatively it shows the same patterns for decoding as the full set of binomial and Zipf distributions, and it is sufficiently skewed to represent many real-world datasets. It is also the distribution for which Rappor does the best relative to RR over the largest range of ε and n in our simulations.

4.4.1 Decoding
We first consider the impact of the choice of decoding mechanism used for RR and Rappor. We find that the best decoder in practice for both RR and Rappor on skewed distributions is the projected decoder, which projects the raw empirical estimate onto the probability simplex using the method described in Algorithm 1 of (Wang & Carreira-Perpiñán, 2013). For RR, we compare the projected empirical decoder to the normalized empirical decoder (which truncates negative values and renormalizes) and to the maximum likelihood decoder (see Supplementary Section F.1). For Rappor, we compare the standard decoder, normalized decoder, and projected decoder. Figure 1 shows that the projected decoder is substantially better than the other decoders for both RR and Rappor across the whole range of ε for the geometric distribution. We find this result holds as we vary the number of users n, and for all distributions we evaluated except for the Dirichlet distribution, which is the least skewed. For the Dirichlet distribution, the normalized decoder variant is best for both RR and Rappor. Because the projected decoder is best on all the skewed distributions we expect to see in practice, we use it exclusively for the open-alphabet experiments in Section 5.
4.4.2 RR vs Rappor
[Figure caption: ℓ1 loss with 90% confidence intervals over 50 samples. Free parameters are set via grid search for each ε. Note that the Rappor and ORappor lines in the closed-alphabet comparison are nearly indistinguishable. Baselines indicate the expected loss from (1) using an empirical estimator directly on the unnoised input and (2) using the uniform distribution as the estimate.]

To construct a fair, empirical comparison of RR and Rappor, we employ the same methodology used above in selecting decoders. Figure 2 shows the difference between the best RR decoder and the best Rappor decoder (for a particular ε and n). For most cells, the best decoder is the projected decoder described above.
Note that the best Rappor decoder is consistently better than the best RR decoder for relatively large k and low ε. However, RR is slightly better than Rappor in all conditions where ε ≥ ln k (bottom-right triangle), an empirical result that complements Proposition 5's statement about ML decoders. All of the skewed distributions manifest the same pattern as the geometric distribution. As the number of users increases, RR's advantage over Rappor in the low privacy environment shrinks. In the next sections, we will examine the use of cohorts to improve decoding and to handle larger, open alphabets.
5 Open Alphabets, Hashing, and Cohorts
In practice, the set of values that may need to be collected may not be easily enumerable in advance, preventing a direct application of the binary and ary formulations of private distribution estimation. Consider a population of users, where each user possesses a symbol drawn from a large set of symbols whose membership is not known in advance. This scenario is common in practice; for example, in Chrome’s estimation of the distribution of home page settings (Erlingsson et al., 2014). Building on this intuitive example, we assume for the remainder of the paper that symbols are strings, but we note that the methods described are applicable to any hashable structures.
5.1 ORR: RR with hashing and cohorts
RR is effective for privatizing over known alphabets. Inspired by (Erlingsson et al., 2014), we extend RR to open alphabets by combining two primary intuitions: hashing and cohorts. Let h be a hash function mapping strings to {1, …, k} with a low collision rate, i.e. h(s) = h(s') with very low probability for s ≠ s'. With hashing, we could use RR to guarantee ε-local privacy over an alphabet of size k by having each client report RR(h(s)). However, as we will see, hashing alone is not enough to provide high utility because of the increased rate of collisions introduced by the modulus.
Complementing hashing, we also apply the idea of hash cohorts: each user is assigned to one of C cohorts, sampled i.i.d. from the uniform distribution over {1, …, C}. Each cohort provides an independent view of the underlying distribution of strings by projecting the space of strings onto a smaller space of k symbols using an independent hash function h_c. The users in cohort c use their cohort's hash function to partition the string space into k disjoint subsets by computing h_c(s). Each subset contains approximately the same number of strings, and because each cohort uses a different hash function, the induced partitions for different cohorts are orthogonal: strings that collide under one cohort's hash function are unlikely to collide under another's, even when s ≠ s'.
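The cohort idea can be sketched as follows; deriving each cohort's hash by salting SHA-256 is our own assumption (any family of approximately independent hashes works), and the strings are illustrative:

```python
import hashlib

def cohort_hash(cohort, s, k):
    """Hash string s into one of k cells using a per-cohort hash function,
    obtained here by salting SHA-256 with the cohort index."""
    digest = hashlib.sha256(f"{cohort}|{s}".encode()).digest()
    return int.from_bytes(digest[:8], "big") % k

strings = ["home", "news", "mail", "search"]
# Different cohorts induce (approximately) independent partitions of the
# string space: two strings colliding in one cohort rarely collide in another.
for c in (0, 1, 2):
    print(c, [cohort_hash(c, s, 2) for s in strings])
```

Each cohort alone loses information to collisions, but aggregating across cohorts recovers it, which is what the least-squares decoding below exploits.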
5.1.1 Encoding and Decoding
For encoding, the ORR privatization mechanism applies RR to h_c(s), with the cohort c sampled independently of s. Therefore, the probability of reporting (c, z) is given by

Pr[(c, z) | s] = (1/C) · e^ε/(k − 1 + e^ε) if z = h_c(s), and (1/C) · 1/(k − 1 + e^ε) otherwise.   (15)

For decoding, fix a candidate set of strings and interpret the privatization mechanism as a row-stochastic matrix with one row per candidate string s and one column per output (c, z):

M = (1/(C(k − 1 + e^ε))) (J + (e^ε − 1) H),   (16)

where J is the all-ones matrix and

H_{s,(c,z)} = 1 if z = h_c(s), and 0 otherwise.   (17)

Note that H is a sparse binary matrix encoding the hashed outputs for each cohort, wherein each row of H has exactly C nonzero entries (one per cohort).
Now m = pM is the expected output distribution for true probability vector p, allowing us to form an empirical estimator by using standard least-squares techniques to solve the linear system:

p̂ = argmin_q ‖qM − m̂‖₂,   (18)

where m̂ is the empirical distribution of the observed reports (c, z).
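The end-to-end pipeline can be sketched as follows; the candidate strings, parameters, and salted-SHA-256 cohort hashes are all illustrative assumptions:

```python
import hashlib
import numpy as np

def h(c, s, k):
    """Cohort-c hash of string s into one of k cells (assumed low-collision)."""
    d = hashlib.sha256(f"{c}|{s}".encode()).digest()
    return int.from_bytes(d[:8], "big") % k

rng = np.random.default_rng(3)
eps, k, C = 1.0, 4, 8                    # k-RR cells and number of cohorts
cand = ["a", "b", "c", "d", "e", "f"]    # hypothetical candidate strings
p = np.array([0.35, 0.25, 0.15, 0.10, 0.10, 0.05])
n, e = 400_000, np.exp(1.0)

Hc = np.array([[h(c, s, k) for s in cand] for c in range(C)])  # C x |cand|

# Encode: each user samples a string and a cohort, hashes, then applies k-RR.
x = rng.choice(len(cand), size=n, p=p)
coh = rng.integers(C, size=n)
cell = Hc[coh, x]
keep = rng.random(n) < e / (k - 1 + e)
z = np.where(keep, cell, (cell + rng.integers(1, k, size=n)) % k)

# Decode: build the channel matrix (outputs (cohort, cell) by candidates)
# and solve the least-squares system against the empirical report histogram.
M = np.full((C * k, len(cand)), 1.0)
for c in range(C):
    for j in range(len(cand)):
        M[c * k + Hc[c, j], j] = e
M /= C * (k - 1 + e)

t_hat = np.bincount(coh * k + z, minlength=C * k) / n
p_hat, *_ = np.linalg.lstsq(M, t_hat, rcond=None)
print(np.round(p_hat, 2))
```

With enough cohorts the candidate columns are linearly independent with high probability, so the least-squares solution concentrates around the true distribution.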
Note that when C = 1 and the single cohort's hash is the identity mapping, (18) reduces to the standard RR empirical estimator of Section 4.1.

5.2 ORappor
Rappor also extends from k-ary alphabets to open alphabets using hashing and cohorts (Erlingsson et al., 2014); we refer to this extension herein as ORappor. However, the Rappor mechanism uses a k-bit input representation as opposed to RR's single-symbol representation. Taking advantage of the larger input space, ORappor uses an independent Bloom filter for each cohort before applying the Rappor mechanism—i.e. the j-th bit of the input representation is 1 if h_{c,i}(s) = j for any i, where the h_{c,i} are a set of mutually independent hash functions modulo k.
Decoding for ORappor is described in (Erlingsson et al., 2014)
and follows a similar strategy as for ORR. However, because this paper focuses on distribution estimation rather than heavy hitter detection, we eliminate both the Lasso regression stage and the filtering of imputed frequencies relative to Bonferroni-corrected thresholds, retaining just the ordinary least-squares regression.
5.3 Simulation Analysis
We ran simulations of ORR and ORappor with inputs drawn from a large alphabet of symbols under a geometric distribution (see Supplementary Figure 4). As described in Section 4.4, the geometric distribution is representative of actual data and is relatively easy for Rappor and challenging for RR. Free parameters were set to minimize the median ℓ1 loss. Similar results for other alphabet sizes and numbers of users are included in the Supplementary Material.
In Figure 2(a), we see that under these conditions, ORR matches the utility of ORappor in both the very low and high privacy regimes and exceeds the utility of ORappor over mid-range privacy settings.
For ORR, we find that the optimal k depends directly on ε, that increasing the number of cohorts C consistently improves performance in the low-to-mid privacy regime, and that too small a choice of k noticeably underperforms across the range of privacy levels. For ORappor, we find that performance improves as k increases (approaching an asymptotic limit), and that too few cohorts noticeably underperforms across the range of privacy values, with larger cohort counts performing indistinguishably. Finally, we find that the optimal number of Bloom filter hash functions is consistently 1, indicating that Bloom filters provide no utility improvement beyond simple hashing. See Supplementary Figure 11 for details.
5.4 Improved Utility for Closed Alphabets
ORR and ORappor extend k-ary mechanisms to open alphabets through the use of hash functions and cohorts. These same mechanisms may also be applied to closed alphabets known a priori. While direct application is possible, the reliance on hash functions exposes both mechanisms to unnecessary risk of hash collisions.
Instead, we modify the ORR and ORappor mechanisms, replacing each cohort's generic hash functions with minimal perfect hash functions over the known alphabet before applying the modulo operation. In most closed-alphabet applications, the hash range matches the alphabet size, in which case these minimal perfect hash functions are simply permutations. Also note that in this setting, ORR and ORappor reduce to exactly their k-ary counterparts when the number of cohorts and the number of Bloom filter hashes are both 1, except that the output symbols are permuted.
In Figure 2(b), we evaluate these modified mechanisms using the same method described in Section 5.3 (note that the utilities of Rappor and ORappor are nearly indistinguishable). ORappor benefits little from the introduction of minimal perfect hash functions. In contrast, ORR's utility improves significantly, meeting or exceeding the utility of all other mechanisms at all privacy levels considered.
6 Conclusion
Data improves products, services, and our understanding of the world. But its collection comes with risks to the individuals represented in the data as well as to the institutions responsible for the data's stewardship. This paper's focus on distribution estimation under local privacy takes one step toward a world where the benefits of data-driven insights are decoupled from the collection of raw data. Our new theoretical and empirical results show that combining cohort-style hashing with the k-ary extension of the classical randomized response mechanism admits practical, state-of-the-art results for locally private logging.
In many applications, data is collected to enable the making of a specific decision. In such settings, the nature of the decision frequently determines the required level of utility, and the number of reports to be collected is predetermined by the size of the existing user base. Thus, the differential privacy practitioner's role is often to offer users as much privacy as possible while still extracting sufficient utility from the given number of reports. Our results suggest that ORR may play a crucial role for such a practitioner, offering a single mechanism that provides maximal privacy at any desired utility level simply by adjusting the mechanism's parameters.
In future work, we plan to examine estimation of nonstationary distributions as they change over time, a common scenario in data logged from user interactions. We will also consider what utility improvements may be possible when some responses need more privacy than others, another common scenario in practice. Much more work remains before we can dispense with the collection of unnoised data altogether.
Acknowledgements. Thanks to Úlfar Erlingsson, Ilya Mironov, and Andrey Zhmoginov for their comments on drafts of this paper.
References
 Bassily & Smith (2015) Bassily, Raef and Smith, Adam. Local, private, efficient protocols for succinct histograms. arXiv preprint arXiv:1504.04686, 2015.
 Blocki et al. (2016) Blocki, Jeremiah, Datta, Anupam, and Bonneau, Joseph. Differentially private password frequency lists. 2016.
 Boyd & Vandenberghe (2004) Boyd, Stephen and Vandenberghe, Lieven. Convex optimization. Cambridge university press, 2004.
 Chan et al. (2012) Chan, TH Hubert, Li, Mingfei, Shi, Elaine, and Xu, Wenchang. Differentially private continual monitoring of heavy hitters from distributed streams. In Privacy Enhancing Technologies, pp. 140–159. Springer, 2012.
 Diakonikolas et al. (2015) Diakonikolas, Ilias, Hardt, Moritz, and Schmidt, Ludwig. Differentially private learning of structured discrete distributions. In Advances in Neural Information Processing Systems, pp. 2557–2565, 2015.
 Duchi et al. (2013a) Duchi, John, Wainwright, Martin J, and Jordan, Michael I. Local privacy and minimax bounds: Sharp rates for probability estimation. In Advances in Neural Information Processing Systems, pp. 1529–1537, 2013a.
 Duchi et al. (2013b) Duchi, John C, Jordan, Michael I, and Wainwright, Martin J. Local privacy, data processing inequalities, and statistical minimax rates. arXiv preprint arXiv:1302.3203, 2013b.
 Dwork (2006) Dwork, C. Differential privacy. In Automata, languages and programming, pp. 1–12. Springer, 2006.

 Dwork & Lei (2009) Dwork, C. and Lei, J. Differential privacy and robust statistics. In Proceedings of the 41st annual ACM symposium on Theory of computing, pp. 371–380. ACM, 2009.
 Dwork et al. (2006) Dwork, C., McSherry, F., Nissim, K., and Smith, A. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography, pp. 265–284. Springer, 2006.
 Dwork (2008) Dwork, Cynthia. Differential privacy: A survey of results. In Theory and applications of models of computation, pp. 1–19. Springer, 2008.
 Erlingsson et al. (2014) Erlingsson, Úlfar, Pihur, Vasyl, and Korolova, Aleksandra. Rappor: Randomized aggregatable privacy-preserving ordinal response. In Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, pp. 1054–1067. ACM, 2014.
 Hsu et al. (2012) Hsu, Justin, Khanna, Sanjeev, and Roth, Aaron. Distributed private heavy hitters. In Automata, Languages, and Programming, pp. 461–472. Springer, 2012.
 Kairouz et al. (2014) Kairouz, Peter, Oh, Sewoong, and Viswanath, Pramod. Extremal mechanisms for local differential privacy. In Advances in Neural Information Processing Systems, pp. 2879–2887, 2014.
 Kamath et al. (2015) Kamath, Sudeep, Orlitsky, Alon, Pichapati, Venkatadheeraj, and Suresh, Ananda Theertha. On learning distributions from their samples. In Proceedings of The 28th Conference on Learning Theory, pp. 1066–1100, 2015.
 Lehmann & Casella (1998) Lehmann, Erich Leo and Casella, George. Theory of point estimation, volume 31. Springer Science & Business Media, 1998.
 McCallum et al. (1998) McCallum, Andrew, Nigam, Kamal, et al. A comparison of event models for naive bayes text classification. In AAAI-98 workshop on learning for text categorization, volume 752, pp. 41–48. Citeseer, 1998.
 Narayanan & Shmatikov (2008) Narayanan, Arvind and Shmatikov, Vitaly. Robust deanonymization of large sparse datasets. In Security and Privacy, 2008. SP 2008. IEEE Symposium on, pp. 111–125. IEEE, 2008.
 Wang & Carreira-Perpiñán (2013) Wang, Weiran and Carreira-Perpiñán, Miguel Á. Projection onto the probability simplex: An efficient algorithm with a simple proof, and an application. CoRR, abs/1309.1541, 2013. URL http://arxiv.org/abs/1309.1541.
 Warner (1965) Warner, Stanley L. Randomized response: A survey technique for eliminating evasive answer bias. Journal of the American Statistical Association, 60(309):63–69, 1965.
Appendix A Proof of Theorem 2
As argued in the proof sketch of Theorem 2, it suffices to show that the minimax risk obeys the data processing inequality. Precisely, we need to show that for any row-stochastic matrix W, r_{QW} ≥ r_Q. Observe that this is equivalent to showing that r_Q ≥ r for any mechanism Q, where r is the minimax risk in the non-private setting.

Consider the set of all randomized estimators p̂. Under randomized estimators, the non-private minimax risk is given by

r = min_{p̂} max_{p ∈ Δ_k} E[ ℓ(p̂(X^n), p) ],

where the expectation is taken over the randomness in the observations X^n and the randomness in p̂. Under an ε-differentially private mechanism Q, the minimax risk is given by

r_Q = min_{p̂} max_{p ∈ Δ_k} E[ ℓ(p̂(Y^n), p) ],

where the expectation is taken over the randomness in the private observations Y^n and the randomness in p̂.

Assume that there exists a (potentially randomized) estimator p̂* that achieves r_Q. Consider the following randomized estimator for the non-private setting: Q is first applied to X^n individually and p̂* is then jointly applied to the outputs of Q. This estimator achieves a risk of r_Q. Therefore, r ≤ r_Q.

If there is no estimator that can achieve r_Q, then there exists a sequence of (potentially randomized) estimators p̂_j such that the risk r_j under p̂_j satisfies lim_{j→∞} r_j = r_Q. Using an argument similar to the one presented above, we get that r ≤ r_j for every j. Taking the limit as j goes to infinity on both sides, we get that r ≤ r_Q. This finishes the proof.
Appendix B Proof of Proposition 3
Appendix C Proof of Proposition 4
Appendix D Proof of Proposition 5
We want to show that for all k ≥ 2 and all ε > 0,

sup_{p ∈ Δ_k} E[ ‖p̂_RR − p‖₁ ] ≤ sup_{p ∈ Δ_k} E[ ‖p̃_Rappor − p‖₁ ] ≤ sup_{p ∈ Δ_k} E[ ‖p̂_Rappor − p‖₁ ],   (19)

where p̂_RR is the empirical estimate of p under RR, p̂_Rappor is the empirical estimate of p under Rappor, and p̃_Rappor is the maximum likelihood estimator under Rappor.