## 1 Introduction

In the modern era of big data, data analyses play an important role in decision-making in healthcare, information technology, and government agencies. The growing availability of large-scale datasets and the ease of data analysis, while beneficial to society, have created a severe crisis of reproducibility in science. In 2011, Bayer HealthCare reviewed 67 in-house projects and found that it could replicate fewer than 25 percent of them, and that over two-thirds of the projects had major inconsistencies [oSEM19]. One major reason is that random noise in the data can often be mistaken for interesting signals, leading to results that are neither valid nor reproducible. This problem is particularly relevant when testing multiple hypotheses, where there is an increased chance of false discoveries based on noise in the data. For example, an analyst may conduct 250 hypothesis tests and find that 11 are significant at the 5% level. This may be exciting to the researcher who publishes a paper based on these findings, but elementary statistics suggests that (in expectation) 12.5 of those tests should be significant at that level purely by chance, even if all the null hypotheses were true. To avoid such problems, statisticians have developed tools for controlling overall error rates when performing multiple hypothesis tests.
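The arithmetic in this example is easy to check by simulation. The following sketch (with the example's numbers: 250 tests, every null true, level 5%) draws uniform p-values, as arise under the null, and counts how many are "significant" purely by chance:

```python
import random

random.seed(0)

def count_false_positives(num_tests=250, level=0.05):
    """Simulate one analysis in which every null hypothesis is true.

    Under the null, p-values are uniform on [0, 1], so each test is
    'significant' at the given level with exactly that probability.
    """
    p_values = [random.random() for _ in range(num_tests)]
    return sum(p <= level for p in p_values)

# In expectation, 250 * 0.05 = 12.5 tests are falsely significant.
trials = [count_false_positives() for _ in range(2000)]
print(sum(trials) / len(trials))  # close to 12.5
```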

In hypothesis testing problems, the *null hypothesis* of no interesting scientific discovery (e.g., a drug has no effect) is tested against the alternative hypothesis that a particular scientific theory is true (e.g., a drug has a particular effect). The significance of each test is measured by a *$p$-value*, which is the probability, under the null hypothesis, of observing data at least as extreme as the actual observations, and a hypothesis is *rejected* if the corresponding $p$-value is below some (fixed) significance level. Each rejection is called a *discovery*, and a rejected hypothesis is called a *false discovery* if the null hypothesis is actually true. When testing multiple hypotheses, the probability of a false discovery due to noise increases as more tests are performed. The problem of *false discovery rate control* is to find a procedure for testing multiple hypotheses that takes in the $p$-values of each test and outputs a set of hypotheses to reject. The goal is to minimize the number (or fraction) of false discoveries, while maintaining a high true positive rate (i.e., many correct discoveries).

In many applications, the dataset may contain sensitive personal information, and the hypothesis testing procedure must be conducted in a privacy-preserving way. For example, in genome-wide association studies (GWAS), a large number of single-nucleotide polymorphisms (SNPs) are tested for an association with a disease, either simultaneously or adaptively. Previous work has shown that the statistical analysis of these datasets can lead to privacy concerns, and it is possible to identify an individual's genotype when only minor allele frequencies are revealed [HSR08]. The field of differential privacy [DMNS06] offers data analysis tools that provide powerful worst-case privacy guarantees, and has become a de facto gold standard in privacy-preserving data analysis. Informally, an algorithm that is $\varepsilon$-differentially private ensures that any particular output of the algorithm becomes at most an $e^{\varepsilon}$ multiplicative factor more likely when a single data point is changed. This parameterized privacy notion allows for a smooth tradeoff between accurate analysis and privacy for the individuals who have contributed data. In the past decade, researchers have developed a wide variety of differentially private algorithms for many statistical tasks; these tools have been implemented in practice at major organizations including Google [EPK14], Apple [Dif17], Microsoft [DKY17], and the U.S. Census Bureau [DLS17].

### 1.1 Related Work

The only prior work on differentially private false discovery rate (FDR) control [DSZ18] considers the traditional offline multiple testing problem, where an analyst has all the hypotheses and corresponding $p$-values upfront. Their private procedure repeatedly applies the ReportNoisyMin mechanism [DR14] to the celebrated Benjamini-Hochberg (BH) procedure [BH95] for offline multiple testing to privately pre-screen the $p$-values, and then applies the BH procedure again to select the significant $p$-values. The (non-private) BH procedure first sorts all $p$-values, and then sequentially compares them to an increasing threshold, where all $p$-values below their (ranked and sequential) threshold are rejected. The ReportNoisyMin mechanism privatizes this procedure by repeatedly (and privately) finding the hypothesis with the lowest $p$-value.
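As a non-private reference point for the offline setting, the BH step-up rule can be sketched in a few lines: rank the $p$-values, find the largest rank $k$ with $p_{(k)} \le \alpha k / m$, and reject the $k$ smallest.

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure for offline FDR control.

    Returns the (sorted) original indices of rejected hypotheses.
    """
    m = len(p_values)
    # Sort p-values while remembering their original indices.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k (1-indexed) with p_(k) <= alpha * k / m.
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= alpha * rank / m:
            k_max = rank
    # Reject the hypotheses with the k_max smallest p-values.
    return sorted(order[:k_max])

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.6]))  # → [0, 1]
```

Note the step-up structure: a $p$-value may be rejected even if it exceeds its own threshold, as long as some larger-ranked $p$-value passes.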

Although the work of [DSZ18] showed that it is possible to integrate differential privacy with FDR control in multiple hypothesis testing, the assumption of having all hypotheses and $p$-values upfront is not reasonable in many practical settings. For example, a hospital may conduct multi-phase clinical trials where more patients join over time, or a marketing company may perform A/B tests sequentially. In this work, we focus on the more practical *online hypothesis testing problem*, where a stream of hypotheses arrives sequentially, and decisions to reject hypotheses must be made based on current and previous results before the next hypothesis arrives. This sequence of hypotheses could be independent or adaptively chosen. Due to the fundamental difference between the offline and online FDR procedures, the method of [DSZ18] cannot be applied to the online setting.

The online multiple hypothesis testing problem was first investigated by [FS08], who proposed a framework known as the *online alpha-investing procedure*, which models the hypothesis testing problem as an investment problem. Extensions of this framework include generalized alpha-investing (GAI) [AR14], Level based On Recent Discovery (LORD) [JM15, JM18], and the current state-of-the-art procedures for online FDR control, SAFFRON [RZWJ18] and ADDIS [TR19].
Further discussion of these approaches appears in Section 2.2.

### 1.2 Our Results

We develop a differentially private online FDR control procedure for multiple hypothesis testing, which takes a stream of $p$-values, a target FDR level $\alpha$, and a privacy parameter $\varepsilon$, and outputs discoveries that control the FDR at a certain level at any time point. Such a procedure provides unconditional differential privacy guarantees (to ensure that privacy is protected even in the worst case) and satisfies the theoretical guarantees dictated by the FDR control problem.

Our algorithm, Private Alpha-investing P-value Rejecting Iterative sparse veKtor Algorithm (PAPRIKA, Algorithm 3), is presented in Section 3. Its privacy and accuracy guarantees are stated in Theorems 4 and 5, respectively. While the full proofs appear in the appendix, we describe the main ideas behind the algorithms and proofs in the surrounding prose. In Section 4, we provide a thorough empirical investigation of PAPRIKA.

## 2 Preliminaries

### 2.1 Background on Differential Privacy

Differential privacy bounds the maximal amount that one data entry can change the output of a computation. Databases $D$ belong to the space $\mathcal{D} = \mathcal{X}^n$ and contain $n$ entries, one for each individual, where each entry belongs to a data universe $\mathcal{X}$. We say that $D, D' \in \mathcal{D}$ are *neighboring databases* if they differ in at most one data entry.

###### Definition 1 (Differential Privacy [DMNS06]).

An algorithm $M : \mathcal{D} \to \mathcal{R}$ is *$(\varepsilon, \delta)$-differentially private* if for every pair of neighboring databases $D, D' \in \mathcal{D}$, and for every subset of possible outputs $S \subseteq \mathcal{R}$,
$$\Pr[M(D) \in S] \le e^{\varepsilon} \Pr[M(D') \in S] + \delta.$$
If $\delta = 0$, we say that $M$ is $\varepsilon$-differentially private.
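A canonical mechanism satisfying Definition 1 is randomized response: each individual reports their true bit with probability $e^{\varepsilon}/(1+e^{\varepsilon})$ and the flipped bit otherwise, so the likelihood ratio between any two neighboring inputs is exactly $e^{\varepsilon}$. A minimal sketch (this example is ours, not part of the paper's algorithms):

```python
import math
import random

def randomized_response(bit, epsilon):
    """Report the true bit w.p. e^eps / (1 + e^eps); flip it otherwise.

    Pr[output = b | input = b] / Pr[output = b | input = 1 - b] = e^eps,
    so the mechanism is epsilon-differentially private.
    """
    p_truth = math.exp(epsilon) / (1 + math.exp(epsilon))
    return bit if random.random() < p_truth else 1 - bit
```

For instance, with $\varepsilon = \ln 3$ the true bit is reported with probability $3/4$, and an aggregate estimate can be debiased accordingly.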

The *additive sensitivity* of a real-valued query $f : \mathcal{D} \to \mathbb{R}$ is denoted $\Delta f$, and is defined to be the maximum change in the function's value that can be caused by changing a single entry. That is,
$$\Delta f = \max_{\text{neighboring } D, D' \in \mathcal{D}} |f(D) - f(D')|.$$
If $f$ is a vector-valued query, the expression above can be modified with the appropriate norm in place of the absolute value. Differential privacy guarantees are often achieved by adding *Laplace noise* at various places in the computation, where the noise scales with $\Delta f / \varepsilon$. A Laplace random variable with parameter $b$ is denoted $\mathrm{Lap}(b)$, and has probability density function
$$f(x \mid b) = \frac{1}{2b} \exp\left(-\frac{|x|}{b}\right).$$
We may sometimes abuse notation and also use $\mathrm{Lap}(b)$ to denote the realization of a random variable with this distribution.
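As a concrete instance, the Laplace mechanism releases $f(D) + \mathrm{Lap}(\Delta f / \varepsilon)$. A minimal sketch, using inverse-transform sampling for the Laplace draw:

```python
import math
import random

def laplace_noise(scale):
    """Sample from Lap(scale) via inverse transform sampling."""
    u = random.random() - 0.5  # uniform on [-1/2, 1/2)
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def laplace_mechanism(true_answer, sensitivity, epsilon):
    """Release a query answer with Laplace noise of scale sensitivity/epsilon."""
    return true_answer + laplace_noise(sensitivity / epsilon)

# Example: a counting query has sensitivity 1 (one person changes the
# count by at most 1), so noise of scale 1/epsilon suffices.
print(laplace_mechanism(true_answer=100, sensitivity=1, epsilon=1.0))
```

The released value is unbiased, and its standard deviation $\sqrt{2}\,\Delta f/\varepsilon$ quantifies the privacy-accuracy tradeoff.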

The SparseVector algorithm, first introduced by [DNPR10] and refined to its current form by [DR14], privately reports the outcomes of a potentially very large number of computations, provided that only a few are “significant.” It takes in a stream of queries, and releases a bit vector indicating whether or not each noisy query answer is above a fixed noisy threshold. We use this algorithm as a framework for our online private false discovery rate control algorithm, since new hypotheses arrive online and we only care about the “significant” hypotheses whose $p$-values fall below a certain threshold. We note that the standard presentation below checks for queries with values above a threshold, but by simply changing signs this framework can be used to check for values *below* a threshold, as we will do with the $p$-values.

###### Theorem 1 ([DNPR10]).

SparseVector is $\varepsilon$-differentially private.

###### Theorem 2 ([DNPR10]).

For any sequence of $k$ queries $f_1, \ldots, f_k$ with sensitivity $\Delta$ such that at most $c$ of them have answers above the threshold $T$, SparseVector outputs with probability at least $1 - \beta$ a stream of answers $a_1, \ldots, a_k \in \{\top, \bot\}$ such that $f_i(D) \ge T - \alpha$ for every $i$ with $a_i = \top$ and $f_i(D) \le T + \alpha$ for every $i$ with $a_i = \bot$, as long as $\alpha = \frac{8c\Delta(\ln k + \ln(2c/\beta))}{\varepsilon}$.
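For intuition, the $c = 1$ special case of SparseVector (often called AboveThreshold [DR14]) can be sketched as follows: Laplace noise of scale $2\Delta/\varepsilon$ is added to the threshold and of scale $4\Delta/\varepsilon$ to each query answer, and processing halts after the first above-threshold report.

```python
import math
import random

def lap(scale):
    """Sample from Lap(scale)."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def above_threshold(queries, threshold, sensitivity, epsilon):
    """AboveThreshold: privately report the first query exceeding the threshold.

    Returns one boolean per processed query; processing stops after the
    first True (the single 'significant' report this budget pays for).
    """
    noisy_t = threshold + lap(2 * sensitivity / epsilon)
    answers = []
    for q in queries:
        if q + lap(4 * sensitivity / epsilon) >= noisy_t:
            answers.append(True)
            break
        answers.append(False)
    return answers
```

For the online FDR setting, the comparison is simply flipped to detect values *below* a threshold, as noted above; the general $c > 1$ case restarts with a fresh noisy threshold after each report.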

Unlike the conventional use of additive sensitivity, [DSZ18] defined the notion of multiplicative sensitivity specifically for $p$-values. It is motivated by the observation that, although the additive sensitivity of a $p$-value may be large, the relative change of the $p$-value between two neighboring databases is stable unless the $p$-value is very small. Using this alternative sensitivity notion means that preserving privacy for these $p$-values requires only a small amount of noise.

###### Definition 2 (Multiplicative Sensitivity [DSZ18]).

A $p$-value function $p : \mathcal{D} \to [0,1]$ is said to be *$(\eta, \nu)$-multiplicatively sensitive* if for all neighboring databases $D$ and $D'$, either both $p(D), p(D') \le \nu$, or
$$e^{-\eta}\, p(D') \le p(D) \le e^{\eta}\, p(D').$$

Specifically, when $\nu$ is sufficiently small, we can treat the logarithm of the $p$-values as having additive sensitivity $\eta$, and we only need to add noise that scales with $\eta/\varepsilon$, which may be much smaller than the noise required under the standard additive sensitivity notion.

### 2.2 Background on Online False Discovery Rate Control

In the online false discovery rate (FDR) control problem, a data analyst receives a stream of hypotheses $H_1, H_2, \ldots$ on the database $D$, or equivalently, a stream of $p$-values $P_1, P_2, \ldots$. The analyst must pick a threshold $\alpha_t$ at each time $t$, and reject hypothesis $H_t$ when $P_t \le \alpha_t$; this threshold can depend on previous hypotheses and discoveries, and the rejection decision must be made before the next hypothesis arrives.

The error metric is the false discovery rate, formally defined as
$$\mathrm{FDP} = \frac{|\mathcal{H}^0 \cap \mathcal{R}|}{|\mathcal{R}| \vee 1}, \qquad \mathrm{FDR} = \mathbb{E}[\mathrm{FDP}],$$
where $\mathcal{H}^0$ is the (unknown to the analyst) set of hypotheses for which the null hypothesis is true, $\mathcal{R}$ is the set of rejected hypotheses, and $\vee$ denotes maximum. We will also write these terms as a function of time to indicate their values after the first $t$ hypotheses: $\mathrm{FDR}(t) = \mathbb{E}\left[\frac{|\mathcal{H}^0 \cap \mathcal{R}(t)|}{|\mathcal{R}(t)| \vee 1}\right]$. The goal of FDR control is to guarantee that for any time $t$, the FDR up to time $t$ is less than a pre-determined quantity $\alpha$.
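In simulations, where the set of true nulls is known, the realized false discovery proportion is straightforward to evaluate:

```python
def fdp(rejected, true_nulls):
    """False discovery proportion: fraction of rejections that are true nulls.

    rejected and true_nulls are sets of hypothesis indices; the denominator
    is max(|rejected|, 1), matching the FDP definition.
    """
    false_discoveries = len(rejected & true_nulls)
    return false_discoveries / max(len(rejected), 1)

print(fdp({1, 4, 7}, {4, 9}))  # 1 of 3 rejections is a true null
```

Averaging this quantity over many simulated runs gives the empirical FDR reported in Section 4.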

Such a problem was first investigated by [FS08], who proposed a framework known as *online alpha-investing* that models the hypothesis testing problem as an investment problem. The analyst is endowed with an initial budget, can test hypotheses at a unit cost, and receives an additional reward for each discovery. The alpha-investing procedure ensures that the analyst always maintains an $\alpha$-fraction of their wealth, and can therefore continue testing future hypotheses indefinitely. Unfortunately, this approach only controls a slightly relaxed version of FDR, known as *mFDR*, which is given by
$$\mathrm{mFDR}(t) = \frac{\mathbb{E}[|\mathcal{H}^0 \cap \mathcal{R}(t)|]}{\mathbb{E}[|\mathcal{R}(t)| \vee 1]}.$$
This approach was later extended to a class of generalized alpha-investing (GAI) rules [AR14]. One subclass of GAI rules, Level based On Recent Discovery (LORD), was shown to have consistently good performance in practice [JM15, JM18]. The SAFFRON procedure, proposed by [RZWJ18], further improves upon the LORD procedures by adaptively estimating the proportion of true nulls, and is the current state-of-the-art in online FDR control for multiple hypothesis testing.

To understand the main differences between the SAFFRON and LORD procedures, we first introduce an oracle estimate of the false discovery proportion (FDP),
$$\mathrm{FDP}^*(t) = \frac{\sum_{j \le t,\, j \in \mathcal{H}^0} \alpha_j}{|\mathcal{R}(t)| \vee 1}.$$
The numerator overestimates the expected number of false discoveries, so $\mathrm{FDP}^*(t)$ overestimates the FDP. The oracle estimator cannot be calculated since $\mathcal{H}^0$ is unknown. LORD's naive estimator $\widehat{\mathrm{FDP}}_{\mathrm{LORD}}(t) = \frac{\sum_{j \le t} \alpha_j}{|\mathcal{R}(t)| \vee 1}$ is a natural overestimate of $\mathrm{FDP}^*(t)$. SAFFRON's threshold sequence is based on a novel estimate of the FDP,

$$\widehat{\mathrm{FDP}}(t) = \frac{\sum_{j \le t} \alpha_j \frac{\mathbf{1}\{P_j > \lambda_j\}}{1 - \lambda_j}}{|\mathcal{R}(t)| \vee 1}, \tag{1}$$

where $\{\lambda_j\}$ is a sequence of user-chosen parameters in the interval $(0,1)$, each of which can be a constant or a deterministic function of the information up to time $j$. This is a much better estimator than LORD's naive estimator. The SAFFRON estimator is a fairly tight estimate of $\mathrm{FDP}^*(t)$, since intuitively $\frac{\mathbf{1}\{P_j > \lambda_j\}}{1 - \lambda_j}$ has unit expectation when $P_j$ is uniform (as under the null), while $P_j$ is stochastically smaller than uniform under non-null hypotheses.
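The estimator in Equation (1) is cheap to maintain online. A sketch with a constant $\lambda$ (the sequences of $p$-values and test levels are illustrative inputs):

```python
def saffron_fdp_estimate(p_values, alphas, lam=0.5):
    """SAFFRON's empirical FDP estimate from Equation (1), constant lambda.

    p_values[j] and alphas[j] are the p-value and test level at step j.
    The term 1{P_j > lambda} / (1 - lambda) has unit expectation when P_j
    is uniform, i.e., under the null hypothesis.
    """
    numerator = sum(a * (p > lam) / (1 - lam)
                    for p, a in zip(p_values, alphas))
    rejections = sum(p <= a for p, a in zip(p_values, alphas))
    return numerator / max(rejections, 1)

# Three tests at level 0.01 each; one rejection (the p-value 0.001).
print(saffron_fdp_estimate([0.9, 0.001, 0.6], [0.01, 0.01, 0.01]))
```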

The SAFFRON algorithm is given formally in Algorithm 2. SAFFRON starts with an error budget, its initial wealth $W_0 < \alpha$, which is allocated to different tests over time. It never loses wealth when testing candidate $p$-values (those with $P_t \le \lambda_t$), and it earns back wealth of $(1 - \lambda_t)\alpha$ on every rejection except the first. By construction, the SAFFRON algorithm controls $\widehat{\mathrm{FDP}}(t)$ to be less than $\alpha$ at any time $t$. The function defining the sequence $\{\lambda_t\}$ can be any coordinatewise non-decreasing function of the past. For example, $\lambda_t$ can be a deterministic sequence of constants, or $\lambda_t = \alpha_t$, as in the case of alpha-investing. These values serve as a weak overestimate of the rejection thresholds $\alpha_t$. The algorithm first checks whether a $p$-value satisfies $P_t \le \lambda_t$, and if so, adds it to the *candidate set* of hypotheses that may be rejected. It then computes the threshold $\alpha_t$ based on the current wealth, the current size of the candidate set, and the number of rejections so far, and rejects the hypothesis if $P_t \le \alpha_t$. The algorithm also takes in a non-increasing sequence of decay factors $\{\gamma_j\}$ which sum to one; these decay factors depreciate past wealth and ensure that the total wealth budget always remains below the desired level $\alpha$.
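The bookkeeping above can be rendered as a short (non-private) sketch. This is a simplified reading of SAFFRON with a constant $\lambda$, initial wealth $\alpha/2$, and the common decay choice $\gamma_j \propto j^{-1.6}$; Algorithm 2's exact accounting may differ in details.

```python
def saffron(p_values, alpha=0.05, lam=0.5, w0=None, gamma_exp=1.6):
    """Simplified SAFFRON with constant lambda (a sketch, not Algorithm 2).

    Returns the 0-indexed times of rejected hypotheses.
    """
    n = len(p_values)
    w0 = (alpha / 2) if w0 is None else w0
    # Non-increasing decay sequence gamma_j ~ j^{-gamma_exp}, summing to one.
    gamma = [j ** -gamma_exp for j in range(1, n + 2)]
    z = sum(gamma)
    gamma = [g / z for g in gamma]

    candidates = []   # candidacy indicators C_1, C_2, ...
    rejections = []   # 0-indexed times of past rejections
    for t in range(n):
        # Wealth from the initial budget, discounted by time elapsed
        # excluding candidates (candidates never cost wealth).
        c0_plus = sum(candidates)
        alpha_t = w0 * gamma[t - c0_plus]
        # Wealth earned back from each past rejection, similarly discounted.
        for j, tau in enumerate(rejections):
            c_j_plus = sum(candidates[tau + 1:])
            reward = ((1 - lam) * alpha - w0) if j == 0 else (1 - lam) * alpha
            alpha_t += reward * gamma[t - tau - 1 - c_j_plus]
        alpha_t = min(lam, alpha_t)
        candidates.append(p_values[t] <= lam)
        if p_values[t] <= alpha_t:
            rejections.append(t)
    return rejections
```

Feeding it a stream with one extremely small $p$-value illustrates the mechanics: that hypothesis is rejected, the wealth jumps, and subsequent large $p$-values are neither candidates nor rejections.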

The SAFFRON algorithm requires that the input sequence of $p$-values not be too correlated under the null hypothesis. This condition is formalized through a *filtration* on the sequence of candidacy and rejection decisions. Intuitively, this means that the sequence of hypotheses cannot be too adaptively chosen, or else the $p$-values may become overly correlated and violate this condition. Denote by $R_t = \mathbf{1}\{P_t \le \alpha_t\}$ the indicator for rejection, and let $C_t = \mathbf{1}\{P_t \le \lambda_t\}$ be the indicator for candidacy. Define the filtration formed by the sequence of $\sigma$-fields $\mathcal{F}^t = \sigma(R_1, \ldots, R_t, C_1, \ldots, C_t)$, and let $\alpha_t = f_t(R_1, \ldots, R_{t-1}, C_1, \ldots, C_{t-1})$, where $f_t$ is an arbitrary function of the first $t-1$ indicators for rejection and candidacy. We say that the null $p$-values are conditionally super-uniformly distributed with respect to the filtration $\{\mathcal{F}^t\}$ if:

$$\text{if the null hypothesis } H_t \text{ is true, then } \Pr\left(P_t \le \alpha_t \mid \mathcal{F}^{t-1}\right) \le \alpha_t. \tag{2}$$

We note that independent $p$-values are a special case of the conditional super-uniformity condition of (2). When the null $p$-values are independent, they satisfy the following condition: $\Pr(P_t \le s) \le s$ for all $s \in [0, 1]$.
SAFFRON provides the following accuracy guarantees; the first two conditions apply if the null $p$-values are conditionally super-uniformly distributed, and the last two apply if the $p$-values are additionally independent under the null.

###### Theorem 3 ([RZWJ18]).

If the null $p$-values are conditionally super-uniformly distributed, then we have:

(a) $\mathrm{mFDR}(t) \le \mathbb{E}[\widehat{\mathrm{FDP}}(t)]$;

(b) The condition $\widehat{\mathrm{FDP}}(t) \le \alpha$ for all $t$ implies that $\mathrm{mFDR}(t) \le \alpha$ for all $t$.

If the null $p$-values are independent of each other and of the non-null $p$-values, and $\alpha_t$ and $\lambda_t$ are coordinatewise non-decreasing functions of the vector of past candidacy and rejection indicators, then

(c) $\mathrm{FDR}(t) \le \mathbb{E}[\widehat{\mathrm{FDP}}(t)]$ for all $t$;

(d) The condition $\widehat{\mathrm{FDP}}(t) \le \alpha$ for all $t$ implies that $\mathrm{FDR}(t) \le \alpha$ for all $t$.

## 3 Private online false discovery rate control

In this section, we provide our algorithm for private online false discovery rate control, Private Alpha-investing P-value Rejecting Iterative sparse veKtor Algorithm (PAPRIKA), given formally in Algorithm 3. It is a differentially private version of SAFFRON, where we use SparseVector to ensure privacy of our rejection set. However, the combination of these tools is far from immediate for several reasons. Although the complete proofs of our privacy and accuracy results appear in the appendix, we elaborate here on the algorithmic details and modifications needed to ensure privacy and FDR control.

Specifically, the SAFFRON algorithm decides to reject hypothesis $H_t$ if the corresponding $p$-value is less than the rejection threshold $\alpha_t$; that is, if $P_t \le \alpha_t$. We instantiate the SparseVector framework in this setting, where $P_t$ plays the role of the query answer and $\alpha_t$ plays the role of the threshold. Note that SparseVector uses a single fixed threshold for all queries, while our algorithm PAPRIKA allows for a dynamic threshold. Our privacy analysis accounts for this change and shows that dynamic thresholds do not affect the privacy guarantees of SparseVector.

Similar to prior work on private offline FDR control [DSZ18], we use the notion of multiplicative sensitivity described in Definition 2, because $p$-values may have high additive sensitivity and would therefore require unacceptably large noise to preserve privacy. We assume that each $p$-value in our input stream is $(\eta, \nu)$-multiplicatively sensitive. As long as $\nu$ is small enough (i.e., less than the rejection threshold), we can treat the logarithms of the $p$-values as queries with additive sensitivity $\eta$. Because of this change, we must make rejection decisions based on the logarithm of the $p$-values, so our rejection condition is $\ln P_t + Z_t \le \ln \alpha_t + Z_t'$ for Laplace noise terms $Z_t, Z_t'$ drawn from the appropriate distributions.

The accuracy guarantees of SparseVector ensure that if a value is reported to be below threshold, then with high probability it is not far above the threshold. However, to ensure that our algorithm satisfies the desired bound $\mathrm{FDR}(t) \le \alpha$, we require that reports of “below threshold” truly correspond to $p$-values below the desired threshold $\alpha_t$. To accommodate this, we shift our rejection threshold down by a parameter $s$, chosen so that the algorithm satisfies $(\varepsilon, \delta)$-differential privacy; the choice can be seen as inspired by the accuracy term of SparseVector given in Theorem 2. Therefore our final rejection condition is $\ln P_t + Z_t \le \ln \alpha_t - s + Z_t'$. This ensures that “below threshold” reports are below $\alpha_t$ with high probability. Empirically, we see that the bound on the shift in Theorem 4 may be overly conservative and lead to no hypotheses being rejected, so we allow an additional scaling parameter that shrinks the magnitude of the shift by a constant factor. The bounds of Theorem 4 correspond to using the full shift, but in many scenarios a smaller shift leads to better performance while still satisfying the privacy guarantee. Further analysis of how to choose this shift parameter is given in Section 4.3.
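The shifted comparison can be made concrete with a small sketch. The noise scales `b1` and `b2` below are hypothetical placeholders; in Algorithm 3 they are calibrated to the multiplicative sensitivity $\eta$ and the privacy budget $\varepsilon$.

```python
import math
import random

def lap(scale):
    """Sample from Lap(scale)."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def noisy_reject(p_value, alpha_t, shift, b1, b2):
    """Shifted, noisy rejection test on the log scale (illustrative sketch).

    Rejects when ln(p) + Z <= ln(alpha_t) - shift + Z', so that 'reject'
    reports correspond to p-values truly below alpha_t with high
    probability. b1 and b2 are hypothetical noise scales.
    """
    return math.log(p_value) + lap(b1) <= math.log(alpha_t) - shift + lap(b2)
```

With the shift in place, a $p$-value just below $\alpha_t$ is typically *not* rejected: the noise would have to overcome the shift, which is exponentially unlikely under the Laplace distribution.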

Even with these modifications, a naive combination of SparseVector and SAFFRON would still not satisfy differential privacy. This is due to the *candidacy indicator* step of the algorithm. In the SAFFRON algorithm, a pre-processing candidacy step occurs before any rejection decisions. This step checks whether each $p$-value is smaller than a loose upper bound $\lambda_t$ on the eventual rejection threshold $\alpha_t$. The algorithm chooses $\lambda_t$ using an alpha-investing rule that depends on the number of candidate hypotheses seen so far, and ensures that $\alpha_t \le \lambda_t$, so only hypotheses in this candidate set can be rejected. These values are used to control the estimate $\widehat{\mathrm{FDP}}(t)$ defined in Equation (1), which serves as a conservative overestimate of the FDP. (For a discussion of how to choose $\lambda_t$, see Lemma 1 or our experimental results in Section 4. Reasonable choices include $\lambda_t = \alpha_t$ or a small constant.)

Without adding noise to the candidacy condition, there may be neighboring databases inducing $p$-values for some hypothesis such that the hypothesis is a candidate under one database but not under its neighbor, and hence the hypothesis would have positive probability of being rejected under the first database and zero probability of rejection under the neighbor. This would violate the differential privacy guarantee intended under SparseVector. If we were to privatize the candidacy condition using, for example, a parallel instantiation of SparseVector, then we would have to reuse the same realizations of the noise when computing the rejection threshold in order to still control the FDP, but this would no longer be differentially private.

Since we cannot add noise to the candidacy condition, in PAPRIKA we instead make the candidacy condition even weaker, so that the candidacy threshold sits well above the (shifted) rejection threshold. Then if a hypothesis has different candidacy results under neighboring databases and the multiplicative sensitivity $\eta$ is small, the hypothesis is still extremely unlikely to be rejected even under the database for which it was a candidate. To see this, consider a pair of neighboring databases that induce $p$-values for which the hypothesis is a candidate under one database but not the other. Due to the multiplicative sensitivity constraint, these two $p$-values differ by at most a factor of $e^{\eta}$, so under either database the $p$-value lies near the candidacy threshold, far above the shifted rejection threshold. Plugging this into the rejection condition $\ln P_t + Z_t \le \ln \alpha_t - s + Z_t'$, we see that the difference of the noise terms would have to exceed the gap between the two thresholds, which by analysis of the Laplace distribution happens with exponentially small probability. (Such parameter values are typical; see the examples in Section 4. The shift term also contributes to this gap, and hence to the bound.) Our PAPRIKA algorithm is thus $(\varepsilon, \delta)$-differentially private, and we account for this failure probability in our (exponentially small) $\delta$ parameter, as stated in Theorem 4.

Our algorithm also controls an estimate of the false discovery proportion at each time $t$,

$$\widehat{\mathrm{FDP}}(t) \le \frac{\alpha}{2}. \tag{3}$$

We note that this is equivalent to the control of $\widehat{\mathrm{FDP}}(t)$ in Equation (1) with $\alpha$ scaled down by a factor of 2. By analyzing and bounding this expression, we achieve FDR bounds for our PAPRIKA algorithm, as stated in Theorem 5.

###### Theorem 4.

For any stream of $p$-values, PAPRIKA (Algorithm 3) is $(\varepsilon, \delta)$-differentially private.

As a starting point, our privacy analysis builds on that of SparseVector, but as discussed above, several crucial modifications are required. To briefly summarize the key considerations: we must handle different thresholds at different times, multiplicative rather than additive sensitivity, a modified notion of the candidate set, and the introduction of a small $\delta$ parameter to account for the new candidate set definition and the shift. The proof of Theorem 4 appears in Appendix A.

Next we describe the theoretical guarantees of FDR control for our private algorithm PAPRIKA, which are an analog of Theorem 3. We need to slightly modify the conditional super-uniformity assumption given in (2) to incorporate the added Laplace noise. Let $R_t$ be the (noisy) rejection decisions, and let $C_t$ be the indicators for candidacy. We let $\alpha_t = f_t(R_1, \ldots, R_{t-1}, C_1, \ldots, C_{t-1})$, where $f_t$ is an arbitrary function of the first $t-1$ indicators for rejection and candidacy. Define the filtration formed by the sequence of $\sigma$-fields $\mathcal{F}^t = \sigma(R_1, \ldots, R_t, C_1, \ldots, C_t)$. We say that the null $p$-values are conditionally super-uniformly distributed with respect to the filtration $\{\mathcal{F}^t\}$ if:

$$\text{if the null hypothesis } H_t \text{ is true, then } \Pr\left(P_t \le \alpha_t \mid \mathcal{F}^{t-1}\right) \le \alpha_t, \tag{4}$$

where the probability is taken over both the data and the added Laplace noise.

Our FDR control guarantees for PAPRIKA mirror those of SAFFRON (Theorem 3). The first two conditions apply if the null $p$-values are conditionally super-uniformly distributed, and the last two conditions apply if the $p$-values are additionally independent under the null.

###### Theorem 5.

If the null $p$-values are conditionally super-uniformly distributed, then we have:

(a) $\mathrm{mFDR}(t) \le \mathbb{E}[\widehat{\mathrm{FDP}}(t)]$;

(b) The condition $\widehat{\mathrm{FDP}}(t) \le \alpha$ for all $t$ implies that $\mathrm{mFDR}(t) \le \alpha$ for all $t$.

If the null $p$-values are independent of each other and of the non-null $p$-values, and $\alpha_t$ and $\lambda_t$ are coordinate-wise non-decreasing functions of the vector of past candidacy and rejection indicators, then

(c) $\mathrm{FDR}(t) \le \mathbb{E}[\widehat{\mathrm{FDP}}(t)] + t\delta$ for all $t$;

(d) The condition $\widehat{\mathrm{FDP}}(t) \le \alpha$ for all $t$ implies that $\mathrm{FDR}(t) \le \alpha + t\delta$ for all $t$.

This statement can be compared with the guarantees of SAFFRON (Theorem 3). As before, the first two conditions provide a bound on mFDR, whereas the latter two conditions bound FDR when the $p$-values are independent. In contrast to the non-private guarantees, we have a slack term depending on $\delta$ in the FDR bound. However, since $\delta$ will generally be cryptographically small in most applications, this will have a negligible effect on the FDR. The proof of Theorem 5 appears in Appendix B.

The following lemma is a key tool in the proof of Theorem 5. Though it is qualitatively similar to Lemma 2 of [RZWJ18], it is crucially modified to show that an analogous statement holds under the addition of Laplace noise. Its proof appears in Appendix C.

###### Lemma 1.

Assume the null $p$-values are independent of each other, and let $g$ be any coordinate-wise non-decreasing function of the candidacy and rejection indicators. Assume $f_t$ and $g_t$ are coordinate-wise non-decreasing functions with $\alpha_t = f_t(C_{1:t-1}, R_{1:t-1})$ and $\lambda_t = g_t(C_{1:t-1}, R_{1:t-1})$. Then for any $t$ such that the null hypothesis $H_t$ is true, we have

$$\mathbb{E}\left[\frac{R_t}{g(C_{1:t}, R_{1:t})} \,\Big|\, \mathcal{F}^{t-1}\right] \le \mathbb{E}\left[\frac{\alpha_t}{g(C_{1:t}, R_{1:t})} \,\Big|\, \mathcal{F}^{t-1}\right]$$

and

$$\mathbb{E}\left[\frac{(1 - C_t)/(1 - \lambda_t)}{g(C_{1:t}, R_{1:t})} \,\Big|\, \mathcal{F}^{t-1}\right] \ge \mathbb{E}\left[\frac{1}{g(C_{1:t}, R_{1:t})} \,\Big|\, \mathcal{F}^{t-1}\right].$$

## 4 Experiments

In this section, we provide experimental results that compare the performance of variations of the PAPRIKA and SAFFRON procedures. In particular, we evaluate the FDR and the statistical power of each algorithm under two different sequences of $\{\lambda_j\}$: one uses a constant sequence $\lambda_j = \lambda$, and the other sets $\lambda_j = \alpha_j$. We refer to the latter case with an AI suffix (PAPRIKA AI and SAFFRON AI, respectively) to indicate that the choice $\lambda_j = \alpha_j$ corresponds to the Alpha-Investing (AI) rule. We generally observe that, even under moderately stringent privacy restrictions, PAPRIKA's performance is comparable to that of the non-private alternatives. Throughout our experiments, we fix the target FDR level and the remaining parameters, and all results are averaged over repeated runs.

We investigate two settings: in Section 4.1, the observations come from Bernoulli distributions, and in Section 4.2, the observations are generated from truncated exponential distributions. In Section 4.3, we discuss our choice of the shift parameter and give guidance on how to choose this parameter in practice. Code for PAPRIKA and our experiments is available at https://github.com/wanrongz/PAPRIKA.

### 4.1 Testing with Bernoulli Observations

In this setting, we assume that we have $n$ individuals in a database $D$, and that each individual's data contains $m$ independent entries. The $j$th entries are associated with i.i.d. Bernoulli variables $X_{1j}, \ldots, X_{nj}$, each of which takes the value $1$ with probability $\theta_j$, and the value $0$ otherwise. Let $s_j$ be the sum of the $j$th entries. A $p$-value for testing the null hypothesis $H_j^0: \theta_j = 1/2$ against $H_j^1: \theta_j > 1/2$ is given by

$$p_j = \Pr\left[\mathrm{Bin}(n, 1/2) \ge s_j\right].$$

[DSZ18] showed that $p_j$ is $(\eta, \nu)$-multiplicatively sensitive, where $\eta$ scales as $\sqrt{\ln(1/\nu)/n}$ and $\nu$ is any small positive constant. We choose $\theta_j$ for our experiments by setting $\theta_j$ to an alternative value greater than $1/2$ with probability $\pi_1$, and to the null value $1/2$ otherwise, where we vary the parameter $\pi_1$, corresponding to the expected fraction of non-nulls.
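The one-sided $p$-value above is an exact binomial tail probability, which can be computed directly (non-privately) from the observed sum:

```python
from math import comb

def binom_p_value(s, n):
    """One-sided p-value for H0: theta = 1/2 vs. theta > 1/2.

    p = Pr[Bin(n, 1/2) >= s]: the probability, under the null, of a sum
    at least as large as the observed value s.
    """
    return sum(comb(n, k) for k in range(s, n + 1)) / 2 ** n

print(binom_p_value(15, 20))  # 15 of 20 successes: p ≈ 0.0207
```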

In the following experiments, we sequentially test $H_j^0: \theta_j = 1/2$ versus $H_j^1: \theta_j > 1/2$ for each $j$. The size of the database and the number of entries (which also equals the number of hypotheses) are held fixed. Our experiments are run under several different shifts, but we report results for a single representative shift; further discussion on the choice of shift is deferred to Section 4.3. The results are summarized in Figure 1, which plots the false discovery rate (FDR) against the expected fraction of non-nulls $\pi_1$. We evaluate the performance of PAPRIKA under two different sequences of $\{\lambda_j\}$: $\lambda_j = \alpha_j$ and a constant sequence, denoted by PAPRIKA AI and PAPRIKA, respectively. The non-private baseline methods are LORD [JM15, JM18], alpha-investing [AR14], and SAFFRON and SAFFRON AI [RZWJ18]. In Figure 1(a) and (b), we compare our algorithm with the baseline methods under a fixed privacy parameter $\varepsilon$. In Figure 1(c,d) and (e,f), we compare the performance of PAPRIKA AI and PAPRIKA, respectively, under varying privacy parameters $\varepsilon$.

As expected, the performance of PAPRIKA generally diminishes as $\varepsilon$ decreases. A notable exception is that the FDR also decreases in Figure 1(c). This phenomenon arises because the choice $\lambda_j = \alpha_j$ results in a smaller candidacy set, leading to fewer rejections. Surprisingly, PAPRIKA AI also gives a lower FDR than the non-private algorithms (Figure 1(a)), since it tends to make fewer rejections. We also see that PAPRIKA AI performs dramatically better than PAPRIKA, suggesting that setting $\lambda_j = \alpha_j$ is a natural choice to ensure good performance in practice.

### 4.2 Testing with Truncated Exponential Observations

In this section, we again assume that there are $n$ individuals in the database $D$, and each individual's data contains $m$ independent entries. Here the $j$th entries are associated with i.i.d. truncated exponential variables $X_{1j}, \ldots, X_{nj}$, each of which is sampled according to the density function

$$f(x) = \frac{\theta_j e^{-\theta_j x}}{1 - e^{-\theta_j T}}, \qquad x \in [0, T],$$

for positive parameters $\theta_j$ and truncation point $T$. Let $s_j$ be the realized sum of the $j$th entries, and let $S_j$ denote the random variable corresponding to the sum of the $j$th entries under the truncated exponential distribution. A $p$-value for testing the null hypothesis $H_j^0: \theta_j = \theta^0$ against the alternative hypothesis $H_j^1: \theta_j < \theta^0$ is given by

$$p_j = \Pr\left[S_j \ge s_j\right],$$

where the probability is computed under the null distribution. [DSZ18] showed that $p_j$ is $(\eta, \nu)$-multiplicatively sensitive, where $\eta$ scales as $\sqrt{\ln(1/\nu)/n}$ and $\nu$ is any small positive constant.

In the following experiments, we generate our database using the exponential distribution model truncated at a fixed point $T$. We set $\theta_j$ to an alternative parameter with probability $\pi_1$, and to the null parameter $\theta^0$ otherwise, where we vary the parameter $\pi_1$, corresponding to the expected fraction of non-nulls.

We sequentially test $H_j^0: \theta_j = \theta^0$ versus $H_j^1: \theta_j < \theta^0$ for each $j$. The size of the database and the number of entries (which also equals the number of hypotheses) are held fixed. We note that there is no closed form for computing the $p$-values; however, the sum of $n$ i.i.d. samples is approximately normally distributed by the Central Limit Theorem. The expectation $\mu_0$ and variance $\sigma_0^2$ of a single truncated exponential draw under the null can be computed explicitly, so $S_j$ is approximately distributed as $\mathcal{N}(n\mu_0, n\sigma_0^2)$, and we compute the $p$-values accordingly. We run the experiments with a fixed shift. The results are summarized in Figure 2.
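The CLT approximation described above can be sketched as follows. The moments of the truncated distribution are computed here by Simpson's rule for brevity (closed forms also exist), and $T$ and $\theta^0$ are left as parameters rather than the paper's specific values:

```python
import math

def trunc_exp_moments(theta, T=1.0, steps=10000):
    """Mean and variance of Exp(theta) truncated to [0, T], via Simpson's rule."""
    norm = 1 - math.exp(-theta * T)
    def integrate(g):
        h = T / steps  # steps must be even for Simpson's rule
        total = g(0.0) + g(T)
        for i in range(1, steps):
            total += (4 if i % 2 else 2) * g(i * h)
        return total * h / 3
    density = lambda x: theta * math.exp(-theta * x) / norm
    mean = integrate(lambda x: x * density(x))
    second = integrate(lambda x: x * x * density(x))
    return mean, second - mean ** 2

def clt_p_value(s, n, theta0, T=1.0):
    """Approximate p-value Pr[S_j >= s] for the sum of n truncated draws.

    Under the null, the sum is approximately N(n*mu0, n*sigma0^2) by the
    CLT; the p-value is the upper-tail probability at the observed sum s.
    """
    mu, var = trunc_exp_moments(theta0, T)
    z = (s - n * mu) / math.sqrt(n * var)
    return 0.5 * math.erfc(z / math.sqrt(2))
```

A sum equal to its null expectation yields a $p$-value of exactly $0.5$, and sums well above it (as produced by a smaller $\theta_j$) yield small $p$-values, as expected.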

All of the methods perform well in this setting. To illustrate the process, we also plot the rejection threshold and wealth versus the hypothesis index in Figure 3. Each “jump” of the wealth corresponds to a rejection. We observe that the rejections of our private algorithms are consistent with the rejections of the non-private algorithms, which empirically confirms their accuracy from another perspective.

One explanation for the good performance observed in Figure 2 could be that the signal separating the null and alternative hypotheses, as parameterized by the gap between $\theta^0$ and the alternative parameter, is very strong, which means the algorithms can easily discriminate between the true null and true non-null hypotheses based on the observed $p$-values. To measure this, we also varied the parameter of the alternative hypotheses. These results are shown in Figure 4, which plots the FDR and power of PAPRIKA and PAPRIKA AI as the alternative parameter varies. As expected, the performance improves as we increase the signal, and we observe that when the signal is too weak, performance begins to decline.

### 4.3 Choice of shift

We now discuss how to choose the shift parameter. Theorem 4 gives a theoretical lower bound on the shift in terms of the privacy parameters, but this bound may be overly conservative. Since the shift is closely tied to the resulting FDR and statistical power, we wish to pick a value that yields good performance in practice. In Theorem 5, we show that the FDR is bounded by our desired level plus a slack term depending on the privacy parameter $\delta$, which naturally requires that $\delta$ be small.

We use the Bernoulli example of Section 4.1 to investigate performance under different choices of the shift, with a fixed privacy parameter $\varepsilon$. The results are summarized in Figure 5, which plots the FDR and power versus the expected fraction of non-nulls $\pi_1$ as we vary the shift size.

Larger shifts lower the rejection threshold, which causes fewer hypotheses to be rejected. This improves the FDR of the algorithm but harms power, as the threshold may become too low to reject even the true non-nulls. Figure 5 shows that the shift size should be chosen by the analyst to balance the tradeoff between FDR and power, as demanded by the application.

## References

- [AR14] Ehud Aharoni and Saharon Rosset. Generalized α-investing: definitions, optimality results and application to public databases. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 76(4):771–794, 2014.
- [BH95] Yoav Benjamini and Yosef Hochberg. Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B (Methodological), 57(1):289–300, 1995.
- [Dif17] Differential Privacy Team, Apple. Learning with privacy at scale. https://machinelearning.apple.com/docs/learning-with-privacy-at-scale/appledifferentialprivacysystem.pdf, December 2017.
- [DKY17] Bolin Ding, Janardhan Kulkarni, and Sergey Yekhanin. Collecting telemetry data privately. In Advances in Neural Information Processing Systems 30, NIPS ’17, pages 3571–3580. Curran Associates, Inc., 2017.
- [DLS17] Aref N. Dajani, Amy D. Lauger, Phyllis E. Singer, Daniel Kifer, Jerome P. Reiter, Ashwin Machanavajjhala, Simson L. Garfinkel, Scot A. Dahl, Matthew Graham, Vishesh Karwa, Hang Kim, Philip Leclerc, Ian M. Schmutte, William N. Sexton, Lars Vilhuber, and John M. Abowd. The modernization of statistical disclosure limitation at the U.S. Census Bureau, 2017. Presented at the September 2017 meeting of the Census Scientific Advisory Committee.
- [DMNS06] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In Proceedings of the 3rd Conference on Theory of Cryptography, TCC ’06, pages 265–284, 2006.
- [DNPR10] Cynthia Dwork, Moni Naor, Toniann Pitassi, and Guy N. Rothblum. Differential privacy under continual observation. In Proceedings of the 42nd ACM Symposium on Theory of Computing, STOC ’10, pages 715–724, 2010.
- [DR14] Cynthia Dwork and Aaron Roth. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3–4):211–407, 2014.
- [DSZ18] Cynthia Dwork, Weijie J Su, and Li Zhang. Differentially private false discovery rate control. arXiv preprint arXiv:1807.04209, 2018.
- [EPK14] Úlfar Erlingsson, Vasyl Pihur, and Aleksandra Korolova. RAPPOR: Randomized aggregatable privacy-preserving ordinal response. In Proceedings of the 2014 ACM Conference on Computer and Communications Security, CCS ’14, pages 1054–1067, New York, NY, USA, 2014. ACM.
- [FS08] Dean P. Foster and Robert A. Stine. α-investing: a procedure for sequential control of expected false discoveries. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70(2):429–444, 2008.
- [HSR08] Nils Homer, Szabolcs Szelinger, Margot Redman, David Duggan, Waibhav Tembe, Jill Muehling, John V. Pearson, Dietrich A. Stephan, Stanley F. Nelson, and David W. Craig. Resolving individuals contributing trace amounts of DNA to highly complex mixtures using high-density SNP genotyping microarrays. PLoS Genetics, 4(8):e1000167, 2008.
- [JM15] Adel Javanmard and Andrea Montanari. On online control of false discovery rate. arXiv preprint arXiv:1502.06197, 2015.
- [JM18] Adel Javanmard and Andrea Montanari. Online rules for control of false discovery rate and false discovery exceedance. The Annals of Statistics, 46(2):526–554, 2018.
- [oSEM19] National Academies of Sciences, Engineering, and Medicine. Reproducibility and Replicability in Science. Washington, DC: The National Academies Press, 2019.
- [RZWJ18] Aaditya Ramdas, Tijana Zrnic, Martin Wainwright, and Michael Jordan. SAFFRON: an adaptive algorithm for online control of the false discovery rate. arXiv preprint arXiv:1802.09098, 2018.
- [TR19] Jinjin Tian and Aaditya Ramdas. ADDIS: An adaptive discarding algorithm for online FDR control with conservative nulls. In Advances in Neural Information Processing Systems 32, NeurIPS ’19, pages 9383–9391. Curran Associates, Inc., 2019.

## Appendix A Proof of Theorem 4

Before proving Theorem 4, we will state and prove the following lemma, which will be useful in the proofs of Theorem 4 and Theorem 5.

###### Lemma 2.

If , and is a constant, we have .

###### Proof.

∎

###### Theorem 4 (restated).

###### Proof.

Fix any two neighboring databases and . Let denote the random variable representing the output of PAPRIKA() and let denote the random variable representing the output of PAPRIKA(). Let denote the total number of hypotheses. When and for all , . When and for all , privacy follows from the privacy of SparseVector. For other cases, the worst case is that for all , and . In this setting, we have
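The SparseVector technique invoked here is the AboveThreshold mechanism of Dwork and Roth [DR14]; the following is an illustrative sketch of that textbook mechanism (with the standard noise scales from [DR14], not necessarily those used inside PAPRIKA):

```python
import numpy as np

def above_threshold(queries, threshold, eps, sensitivity=1.0, seed=None):
    """AboveThreshold (SparseVector) sketch, after Dwork and Roth [DR14]:
    privately report the index of the first query exceeding the threshold.

    Half the budget perturbs the threshold once; the other half perturbs
    each query. Reporting a single above-threshold index in this way is
    eps-differentially private for sensitivity-1 queries.
    """
    rng = np.random.default_rng(seed)
    noisy_T = threshold + rng.laplace(scale=2.0 * sensitivity / eps)
    for i, q in enumerate(queries):
        noisy_q = q + rng.laplace(scale=4.0 * sensitivity / eps)
        if noisy_q >= noisy_T:
            return i      # first (noisy) above-threshold query; halt here
    return None           # no query cleared the noisy threshold
```

The key point used in the proof is that halting at the first above-threshold query lets the entire stream of below-threshold answers be released at no additional privacy cost.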

To satisfy ε-differential privacy, we need to bound the probability of outputting for database . We first consider . We wish to bound and . The latter is trivial since , which is greater than 1. It remains to satisfy , which is equivalent to . We have

(5)

(6)

(7)

where Inequality (5) is because the worst case happens when is below the candidacy threshold , Equation (6) applies Lemma 2, and Inequality (7) follows from the facts that for all and that the third term in (6) is positive. Setting (7) to be larger than , we have,
