Private Sequential Hypothesis Testing for Statisticians: Privacy, Error Rates, and Sample Size

The sequential hypothesis testing problem is a class of statistical analyses where the sample size is not fixed in advance. Instead, the decision process takes in new observations sequentially and makes real-time decisions for testing an alternative hypothesis against a null hypothesis until some stopping criterion is satisfied. In many common applications of sequential hypothesis testing, the data can be highly sensitive and may require privacy protection; for example, sequential hypothesis testing is used in clinical trials, where doctors sequentially collect data from patients and must determine when to stop recruiting patients and whether the treatment is effective. The field of differential privacy has been developed to offer data analysis tools with strong privacy guarantees, and has been commonly applied to machine learning and statistical tasks. In this work, we study the sequential hypothesis testing problem under a slight variant of differential privacy, known as Renyi differential privacy. We present a new private algorithm based on Wald's Sequential Probability Ratio Test (SPRT) that gives strong theoretical privacy guarantees. We provide theoretical analysis of statistical performance, measured by Type I and Type II error as well as the expected sample size, and we empirically validate our theoretical results on several synthetic databases, showing that our algorithms also perform well in practice. Unlike previous work in private hypothesis testing that focused only on the classical fixed-sample setting, our results in the sequential setting allow a conclusion to be reached much earlier, thus saving the cost of collecting additional samples.


1 Introduction

Hypothesis testing is a fundamental task in statistics and machine learning that involves testing a null hypothesis H_0 against an alternative hypothesis H_1, given observed data. For the usual statistical hypothesis tests, the sample size is fixed before the data are collected, but for a sequential test we observe streaming data, where the total sample size depends on the data and is thus a random variable. Sequential hypothesis testing is valuable because it may enable a decision to be reached earlier than with a fixed sample size test, which is critical when waiting for additional samples is costly.

The most prominent algorithm for sequential hypothesis testing is the Sequential Probability Ratio Test (SPRT), initially developed by [Wal45] for efficient testing of anti-aircraft gunnery during World War II, and later used in the design of fully sequential clinical trials [Arm50, Arm54]. This algorithm continuously monitors the log-likelihood ratio of the observed data under the alternative and under the null hypotheses, and halts as soon as this ratio takes a value that is either very large or very small, reflecting that one hypothesis is overwhelmingly more likely than the other, given the observed data. The analyst running SPRT can choose these thresholds to trade off her desired confidence in her final decision against making decisions quickly (with respect to the number of samples). Today, SPRT and other techniques for sequential hypothesis testing are widely used in many real-world applications, including clinical trials and quality control [Wal04, Sie13, Whi97, GS91].

Performance of a sequential testing procedure is evaluated using four main criteria: two operating characteristic (OC) functions to describe the accuracy of final decisions, and two average sample number (ASN) functions to describe how quickly a decision was reached. The two OC criteria are the probability of Type I error, α, and the probability of Type II error, β. Since the number of observations T is a random variable, the two ASN functions are the expected sample sizes under the null and alternative hypotheses, E_0[T] and E_1[T], respectively. [WW48] showed that the sequential probability ratio test (SPRT) is the optimal test of a simple null against a simple alternative when observations are assumed to be sampled i.i.d., where optimality is defined as simultaneously minimizing both E_0[T] and E_1[T] subject to constraints on the Type I and Type II error probabilities.

In modern applications of sequential hypothesis testing, for example in medical clinical trials, privacy also becomes another crucial performance criterion, as the data and decisions can be highly sensitive. The field of differential privacy [DMNS06] has emerged as the gold standard in private data analysis by providing algorithms with strong worst-case privacy guarantees. It is a parameterized privacy notion, where the privacy parameter ε allows for a smooth tradeoff between accuracy of the analysis and privacy for the individuals in the database. Informally, an algorithm is ε-differentially private if it ensures that any particular output of the algorithm is at most e^ε times more likely when a single user's data are changed. In recent years, tools for differentially private data analysis have been deployed in practice by major organizations such as Google [EPK14], Apple [Dif17], Microsoft [DKY17], and the U.S. Census Bureau [DLS17].

In this work, we provide the first differentially private algorithm for the sequential hypothesis testing problem with theoretical guarantees on the Type I and Type II error and the expected sample size. By focusing on the metrics most relevant to the field of statistics and its practitioners, our work may be more readily deployed in practice. One real-world application of our results is the design of statistically valid sequential experiments and clinical trials before data are collected or observed. Typically, when designing sequential experiments, a scientist must develop and pre-register a well-justified protocol for making final decisions under all possible data outcomes, and no further adjustments to the protocol can be made once data collection has begun. Fully sequential design of clinical trials, as suggested by [Arm50, Arm54], where evaluation occurs after each new patient outcome, was not always possible for statistical or practical reasons; e.g., it is difficult to convene a data and safety monitoring committee after each observation. With recent advancements in statistics and computing, it has become feasible to continuously monitor and evaluate every patient [Whi97]. Modern examples of fully sequential trials include the "MADIT" clinical trial to evaluate the effect of an implanted defibrillator [DeM98] and a COVID-19 therapeutics trial intended to speed up the decision process [Har20]. Fully sequential trials risk leaking patients' sensitive information, especially for patients whose data are collected shortly before the trial is halted. Our proposed private sequential test can be used for monitoring trials where privacy protection is necessary, such as those with irreversible clinical outcomes like death or severe infectious disease, and it balances the tradeoff between small expected sample sizes for rapid decisions, controlled Type I and Type II error properties, and formal privacy protections.

1.1 Our contribution

In this work, we combine tools from differential privacy with classical statistical methods for sequential hypothesis testing to develop a private version of Wald’s SPRT, which we call PrivSPRT.

The most natural existing tool for privatizing Wald's SPRT is a private subroutine called AboveThresh [DNR09, DR14] (also known as SparseVector). This algorithm takes in a database and a stream of queries, and sequentially and privately tests whether the numerical value of each query evaluated on the database is above or below a pre-specified threshold. A natural first attempt at a private version of SPRT would be to instantiate AboveThresh using the SPRT test statistic as the query and the SPRT stopping criteria as the threshold (see Section 2.1 for more details). However, as we show in Section 4, the random noise internal to AboveThresh that is used to guarantee privacy causes extremely poor performance in terms of the relevant OC and ASN metrics. In particular, we note that while AboveThresh was designed to provide good performance with respect to high-probability finite-sample guarantees that are commonly used in the computer science literature, it fails to perform well on the metrics that are most relevant to the statistics community, such as Type I and Type II error.

We instead build our algorithm PrivSPRT using a generalized version of AboveThresh from [ZW20] instantiated with Gaussian noise (rather than Laplace noise as in [DNR09]), and we show that this modification results in good performance in terms of the OC and ASN metrics of interest. Specifically, we give bounds on the expected sample size of PrivSPRT (Theorem 8) and the Type I and Type II error (Theorem 9). We analyze the privacy of PrivSPRT through a generalization of DP known as Renyi differential privacy (RDP) [Mir17], which is often preferred in practice due to its tighter composition properties with Gaussian noise [WBK19]. We show that PrivSPRT satisfies RDP (Theorem 7), which also implies that it satisfies DP (Theorem 4). Finally, we perform experiments to empirically validate our theoretical findings (Section 4).

1.2 Related work

Background on the non-private SPRT was presented earlier in this section, so we focus our attention here on private hypothesis testing. Private (fixed-sample-size) hypothesis testing has previously been considered in the static setting, where the analyst wishes to test a hypothesis (or family of hypotheses) at a single point in time for a fixed database [GLRV16, GR18, She18, CKS19, CKM19]. Dynamic or online private sequential decision making has recently gained traction in various settings, including recent work on private sequential change-point detection [CKM18, CKLZ20, ZKT21]. These works all rely on the AboveThresh/SparseVector technique to achieve privacy in sequential change-point problems, where the focus is on private estimation of the change-point parameter. Our work deals with the sequential hypothesis testing problem, which is essentially a classification problem, and our aim is to provide a unifying approach by showing that a generalization of this technique can be applied to solve general private sequential hypothesis testing problems for a more general class of accuracy objectives.

[WSMD20] considers a privatization of SPRT. Their algorithm adds Laplace noise to the thresholds to generate a noisy stopping time, and then uses the exponential mechanism to output the binary decision. They show that their algorithm provides a weaker, data-dependent notion of privacy, which only converges to DP as the stopping time goes to infinity. In contrast, our results aim to minimize the stopping time, and therefore a direct comparison would not be applicable.

2 Preliminaries

This section provides the background on sequential hypothesis testing (Section 2.1) and the differentially private tools (Section 2.2) that will be brought to bear in our PrivSPRT algorithm.

2.1 Sequential hypothesis testing

A sequence of data points X_1, X_2, … is observed sequentially, i.e., arriving one at a time. Let f_n denote the true joint probability density function (pdf) of the first n observations, (X_1, …, X_n). Under the simplest model, where the data points are sampled i.i.d. from some distribution f, we have f_n(x_1, …, x_n) = ∏_{i=1}^n f(x_i); in more general dependence models, the joint pdf need not factorize in this way.

In sequential hypothesis testing problems, the analyst has two possible hypotheses on the pdfs, H_0 and H_1, and her goal is to quickly (i.e., with as few samples as possible) and correctly test the null hypothesis H_0 against the alternative H_1. (For simplicity, in the remainder of this paper we abuse notation and use the subscripts 0 and 1 to indicate probability with respect to the distributions given in H_0 and H_1, respectively.) At each time n, the analyst must make one of the following three decisions: (1) halt collecting observations and accept the null hypothesis H_0, (2) halt collecting observations and reject the null hypothesis H_0, or (3) continue collecting observations to provide additional information.

There are four main criteria to assess the performance of sequential tests, comprising two operating characteristic (OC) functions and two average sample number (ASN) functions [Wal04, Sie13]. The two OC functions are the Type I error, α = P_0(reject H_0), and the Type II error, β = P_1(accept H_0), which address correct decision-making and are well-studied in the standard classification or hypothesis testing contexts. The two ASN functions are the expected sample sizes under the null and alternative hypotheses, E_0[T] and E_1[T], which ensure that decisions are made efficiently and that unnecessary costs are not incurred by collecting too many samples. In sequential hypothesis testing problems, the objective is to simultaneously minimize E_0[T] and E_1[T] subject to the constraints that the Type I and Type II error probabilities are both small.

Wald's sequential probability ratio test (SPRT) [Wal45] is a celebrated optimal solution when testing a simple null H_0, where the joint distribution is completely specified, against a simple alternative H_1 under the simplest i.i.d. model, where the data are independent and identically distributed with density f_0 under H_0 and f_1 under H_1. The idea behind SPRT is straightforward: the analyst continues to collect observations until she has enough evidence to confidently decide whether H_0 or H_1 is true, as measured by the cumulative log-likelihood ratio statistic being either too large or too small. Mathematically, at each time n, the analyst calculates the cumulative log-likelihood ratio statistic Λ_n; under the i.i.d. model, this test statistic becomes Λ_n = Σ_{i=1}^n log(f_1(X_i)/f_0(X_i)). Moreover, the analyst chooses two positive constants a and b, and runs the SPRT until the following stopping time is reached: T = inf{n ≥ 1 : Λ_n ∉ (−a, b)}. After reaching the stopping criterion, a statistical decision is made based on the following rule: reject H_0 if Λ_T ≥ b, and accept H_0 if Λ_T ≤ −a.

Intuitively, the set (−a, b) is the range of test statistics where the analyst is uncertain between H_0 and H_1. If the test statistic ever falls outside of this range, then the analyst can have high confidence about one of the hypotheses being true. Under the i.i.d. model, the SPRT is exactly optimal in the sense of minimizing both expected sample sizes, E_0[T] and E_1[T], simultaneously, among all (sequential or fixed-sample-size) tests whose Type I and Type II error probabilities are the same as (or smaller than) those of the SPRT [WW48]. Below we denote the final decision by d, with d = 1 if Λ_T ≥ b (or d = 0 if Λ_T ≤ −a).
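As a concrete illustration, the stopping rule above can be sketched in a few lines of (non-private) Python. The Bernoulli densities, the threshold choice a = b = log 20, and the data-generating parameter are illustrative choices for this sketch, not values from the paper.

```python
import numpy as np

def sprt(stream, f0, f1, a, b):
    """Wald's SPRT: accumulate the log-likelihood ratio Lambda_n and stop
    as soon as it leaves the continuation region (-a, b)."""
    llr = 0.0
    for n, x in enumerate(stream, start=1):
        llr += np.log(f1(x) / f0(x))
        if llr >= b:
            return "reject H0", n
        if llr <= -a:
            return "accept H0", n
    return "undecided", len(stream)  # ran out of data before deciding

# Illustrative example: Bernoulli(0.3) vs Bernoulli(0.7), data from the alternative.
bern = lambda theta: (lambda x: theta**x * (1 - theta)**(1 - x))
rng = np.random.default_rng(0)
data = rng.binomial(1, 0.7, size=10_000)
decision, n = sprt(data, bern(0.3), bern(0.7), a=np.log(20), b=np.log(20))
```

Each observation moves the statistic by ±log(0.7/0.3) ≈ 0.85, so with thresholds at ±log 20 ≈ 3.0 a decision typically arrives within a handful of samples.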

Theorem 1 (Error Rates [Wal45]).

Ignoring threshold overshoot, the approximation of the Type I error of SPRT is α ≈ e^{−b}, and the approximation of the Type II error of SPRT is β ≈ e^{−a}.

The additional assumption that the observations are independent and identically distributed is required to give the expected sample size.

Theorem 2 (Expected Sample Size [Wal45]).

When X_1, X_2, … are sampled i.i.d., SPRT has expected sample sizes:

E_0[T] ≈ ((1 − α)a − αb) / D(f_0 ‖ f_1), (1)
E_1[T] ≈ ((1 − β)b − βa) / D(f_1 ‖ f_0), (2)

where D(· ‖ ·) denotes the KL divergence.
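These approximations are easy to evaluate numerically. The sketch below uses the standard Wald first-order formulas (as reconstructed above; overshoot is ignored) to pick thresholds from target error rates and predict the expected sample sizes; the Bernoulli hypotheses are an illustrative choice.

```python
import math

def kl_bernoulli(p, q):
    """KL divergence D(Ber(p) || Ber(q))."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def wald_design(alpha, beta, p0, p1):
    """Wald's first-order approximations: choose thresholds from target
    error rates, then predict the expected sample sizes (overshoot ignored)."""
    b = math.log(1 / alpha)   # alpha ~ exp(-b)
    a = math.log(1 / beta)    # beta  ~ exp(-a)
    en0 = ((1 - alpha) * a - alpha * b) / kl_bernoulli(p0, p1)
    en1 = ((1 - beta) * b - beta * a) / kl_bernoulli(p1, p0)
    return a, b, en0, en1

# Target 5% errors for Bernoulli(0.3) vs Bernoulli(0.7):
a, b, en0, en1 = wald_design(0.05, 0.05, p0=0.3, p1=0.7)
# a = b = log(20) ~ 3.0; both expected sample sizes come out near 8 observations
```

By symmetry of this pair of hypotheses, the two predicted expected sample sizes coincide.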

2.2 Differential privacy

Differential privacy is a statistical notion of database privacy, which ensures that the output of an algorithm will still have approximately the same distribution if a single data entry were to be changed. Differential privacy considers a general database space 𝒟. If databases are real-valued and contain a fixed number n of entries, then 𝒟 = ℝⁿ; in our sequential hypothesis testing setting, our database will be of a random size, so 𝒟 = ∪_{n≥1} ℝⁿ. Two databases D, D′ are said to be neighboring if they differ in at most one entry.

Definition 1 (Differential Privacy [DMNS06]).

A randomized algorithm M is (ε, δ)-differentially private if for every pair of neighboring databases D, D′, and for every subset S of possible outputs,

Pr[M(D) ∈ S] ≤ e^ε Pr[M(D′) ∈ S] + δ.

Renyi differential privacy (RDP) is a relaxation of differential privacy based on the Renyi divergence of order α > 1, defined as D_α(P ‖ Q) = (1/(α − 1)) log E_{x∼Q}[(P(x)/Q(x))^α]. This privacy notion requires that the distributions over outputs on two neighboring databases are close in Renyi divergence.

Definition 2 (Renyi Differential Privacy [Mir17]).

A randomized algorithm M is (α, ε)-RDP with order α > 1 if for all neighboring databases D, D′ it holds that D_α(M(D) ‖ M(D′)) ≤ ε.
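For intuition, the Renyi divergence of two discrete distributions can be computed directly from the definition. This is a small illustration (the example distributions are arbitrary), not code from the paper.

```python
import math
import numpy as np

def renyi_divergence(p, q, alpha):
    """D_alpha(P || Q) = log( sum_x P(x)^alpha * Q(x)^(1-alpha) ) / (alpha - 1)
    for discrete distributions given as probability vectors, with alpha > 1."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.log(np.sum(p**alpha * q**(1 - alpha))) / (alpha - 1))

# Order-2 divergence between a fair coin and a biased coin.
d2 = renyi_divergence([0.5, 0.5], [0.75, 0.25], alpha=2.0)  # equals log(4/3)
```

As a sanity check, the divergence of a distribution from itself is zero for every order.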

Renyi differential privacy is desirable for its straightforward composition, meaning that the privacy parameters degrade gracefully as additional computations are performed on the data, even when the private mechanisms are chosen adaptively. This allows us to design RDP mechanisms using simple private building blocks.

Theorem 3 (Basic RDP Composition [Mir17]).

If M_1 is (α, ε_1)-RDP and M_2 is (α, ε_2)-RDP, then the mechanism defined as (M_1, M_2) satisfies (α, ε_1 + ε_2)-RDP.

While DP also satisfies its own variant of composition, RDP is especially amenable to composition of Gaussian noise mechanisms. We can also easily translate between the notions of RDP and DP, because any (α, ε)-RDP mechanism is also (ε + log(1/δ)/(α − 1), δ)-differentially private, as shown below in Theorem 4. Thus when running multiple RDP mechanisms, a common approach is to first perform RDP composition across the mechanisms and then translate the RDP guarantee into one of differential privacy.

Theorem 4 (From RDP to DP [Mir17]).

If M is (α, ε)-RDP, then it is also (ε + log(1/δ)/(α − 1), δ)-differentially private for any 0 < δ < 1.
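To make the compose-then-convert workflow concrete, the sketch below composes the RDP curves of two Gaussian mechanisms and minimizes the Theorem 4 conversion over a grid of orders. The Gaussian RDP curve (α, α·Δ²/(2σ²)) is the standard one from [Mir17]; the σ and δ values here are illustrative.

```python
import math

def gaussian_rdp_eps(alpha, sens, sigma):
    """RDP curve of the Gaussian mechanism: (alpha, alpha * sens^2 / (2 sigma^2))-RDP."""
    return alpha * sens**2 / (2 * sigma**2)

def rdp_to_dp(rdp_eps_fn, delta, alphas):
    """Theorem 4: an (alpha, eps)-RDP mechanism is
    (eps + log(1/delta)/(alpha - 1), delta)-DP; minimize over a grid of orders."""
    return min(rdp_eps_fn(a) + math.log(1 / delta) / (a - 1) for a in alphas)

# Two adaptive releases of a sensitivity-1 query with sigma = 5:
# by Theorem 3, the RDP epsilons simply add before conversion.
alphas = [1 + x / 10 for x in range(1, 1000)]
eps = rdp_to_dp(lambda a: 2 * gaussian_rdp_eps(a, 1.0, 5.0), 1e-5, alphas)
```

A single release converts to a smaller DP epsilon than the two-release composition, as expected.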

Mechanisms for achieving both privacy notions typically add noise that scales with the sensitivity of the function being evaluated, which is the maximum change in the function's value between two neighboring databases. For a real-valued function q, this is formally defined as Δq = max_{neighboring D, D′} |q(D) − q(D′)|.

The Gaussian mechanism with parameters (ε, δ) takes in a function q and a database D, and outputs q(D) + N(0, σ²). The scale of the noise is fully specified as σ = Δq √(2 log(1.25/δ)) / ε, given the privacy parameters ε and δ and the query sensitivity Δq.

Theorem 5 (Privacy of Gaussian Mechanism [DR14]).

The Gaussian mechanism with parameter σ = Δq √(2 log(1.25/δ)) / ε is (ε, δ)-differentially private.
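A direct sketch of this calibration follows; the query value and privacy parameters are illustrative.

```python
import math
import numpy as np

def gaussian_mechanism(value, sensitivity, eps, delta, rng):
    """Classical Gaussian mechanism: add N(0, sigma^2) noise with
    sigma = sensitivity * sqrt(2 log(1.25/delta)) / eps."""
    sigma = sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / eps
    return value + rng.normal(0.0, sigma)

# A count query with sensitivity 1, released with (eps, delta) = (1, 1e-5).
rng = np.random.default_rng(1)
noisy_count = gaussian_mechanism(42.0, sensitivity=1.0, eps=1.0, delta=1e-5, rng=rng)
```

With these parameters the noise standard deviation is about 4.84, so individual releases are noisy but unbiased.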

The AboveThresh algorithm [DNR09, DR14] is a DP mechanism for handling a sequence of queries arriving online. It takes in a potentially unbounded stream of queries, compares the answer of each query to a fixed noisy threshold, and halts when it finds a noisy answer that exceeds the noisy threshold (outputting ⊤ for that query, and ⊥ otherwise), where the added noise follows the Laplace distribution. In many cases, more concentrated noise (e.g., Gaussian) is preferred, and [ZW20] gives the generalized version GenAboveThresh (presented in Algorithm 1), using general noise-adding mechanisms M_1 and M_2. These mechanisms can be any RDP algorithms that take in a real-valued input and produce a noisy estimate of the value. Our algorithm PrivSPRT will rely on an instantiation of GenAboveThresh using Gaussian mechanisms for differential privacy.

Input: database D, stream of queries q_1, q_2, … each with sensitivity Δ, threshold T, noise-adding mechanisms M_1, M_2 that each add noise to their real-valued input.
Let T̂ = M_1(T)
for each query q_i do
     Let q̂_i = M_2(q_i(D))
     if q̂_i ≥ T̂ then
          Output ⊤
          Halt
     else
          Output ⊥
     end if
end for
Algorithm 1 Generalized Above Noisy Threshold: GenAboveThresh(D, {q_i}, T, M_1, M_2)
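A minimal Python sketch of Algorithm 1 with Gaussian noise follows. The noise scales and the query stream are illustrative; the paper's exact calibration of M_1 and M_2 is not reproduced here.

```python
import numpy as np

def gen_above_thresh(data, queries, threshold, sigma1, sigma2, rng):
    """GenAboveThresh sketch: perturb the threshold once up front, perturb
    each query answer with fresh noise, and halt at the first noisy answer
    that crosses the noisy threshold, returning that stopping index."""
    noisy_threshold = threshold + rng.normal(0.0, sigma1)
    for i, q in enumerate(queries, start=1):
        if q(data) + rng.normal(0.0, sigma2) >= noisy_threshold:
            return i        # the "above" output: halt here
    return None             # stream exhausted; every output was "below"

# Illustrative stream: q_i(D) = sum of the first i entries, so the true
# crossing of threshold 10 happens near i = 10.
data = [1.0] * 100
queries = [(lambda D, i=i: sum(D[:i])) for i in range(1, 101)]
stop = gen_above_thresh(data, queries, threshold=10.0, sigma1=0.5, sigma2=0.5,
                        rng=np.random.default_rng(0))
```

With small noise scales the private stopping index concentrates near the true crossing point.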
Theorem 6 (Privacy of GenAboveThresh [ZW20]).

Let M_1 be any private mechanism that satisfies (α, ε_1)-RDP for queries with sensitivity Δ, and M_2 be any private mechanism that satisfies (α, ε_2)-RDP for queries with sensitivity Δ. Let T be a random variable indicating the stopping time of Algorithm 1 instantiated with (M_1, M_2). Then Algorithm 1 (denoted by M) satisfies

(3)

and

(4)

for all and , where is the added noise from .

In the case where the expected length is bounded by , Theorem 6 implies an RDP bound of the form .

3 Private Sequential Hypothesis Testing

In this section, we present our main result, which is a differentially private algorithm for the sequential hypothesis testing problem that also has small expected sample size and low Type I and Type II errors. We present our PrivSPRT algorithm in Section 3.1 and the theoretical results on privacy, error rates, and sample size in Section 3.2.

3.1 PrivSPRT algorithm

We present our algorithm for private sequential hypothesis testing, PrivSPRT, given formally in Algorithm 2. The algorithm is a private version of SPRT, and it uses two parallel instantiations of GenAboveThresh to ensure privacy of the statistical decision. It instantiates two Gaussian mechanisms with parameters σ_1 and σ_2 as the noise-adding mechanisms M_1 and M_2, respectively. At each time n, the algorithm computes the log-likelihood ratio and uses one Gaussian mechanism to add noise to it. It then compares this noisy statistic against two pre-fixed noisy thresholds that depend on the SPRT decision thresholds a and b and the other Gaussian mechanism. The stopping condition of PrivSPRT is similar to that of SPRT, only using noisy versions of the thresholds. Once the stopping condition is reached, the algorithm stops collecting additional samples and outputs its statistical decision.

It is useful to highlight that adding noise to the cumulative log-likelihood ratio statistic allows us to maintain the first-order statistical optimality of our proposed algorithm. Here, first-order optimality means that the expected sample sizes of our algorithm, subject to the privacy constraints, converge to the classical optimal non-private expected sample sizes up to lower-order terms. Meanwhile, we should mention that one could instead add noise to the individual log-likelihood ratio statistics to satisfy the privacy constraints, but doing so would severely affect the expected sample sizes, and thus yield algorithms that are suboptimal from the statistical efficiency viewpoint.

The sensitivity of the log-likelihood ratio is determined by the maximum change in a single term log(f_1(x)/f_0(x)) when one entry of the database changes. For certain distributions, including Gaussians, this sensitivity is unbounded and therefore would require infinite noise to preserve privacy. We instead use a truncation parameter C to control the sensitivity of the log-likelihood ratio calculation, and add noise proportional to the post-truncation range. We note that the idea of truncating the likelihood for privacy also appears in [CKM19] for private simple hypothesis testing and [ZKT21] for private sequential change-point detection. The C-truncated log-likelihood ratio replaces each term log(f_1(X_i)/f_0(X_i)) with its truncation to [−C, C],

where the truncation operation is defined as [y]_{−C}^{C} = max(−C, min(y, C)).
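The truncation step is a one-line clamp; the Gaussian pair below illustrates why it is needed, since without truncation the increment x − 1/2 is unbounded. The densities (with normalizing constants dropped, as they cancel) and the choice C = 2 are illustrative.

```python
import math
import numpy as np

def truncated_llr(x, f0, f1, c):
    """C-truncated individual log-likelihood ratio: clamp log(f1(x)/f0(x))
    to [-c, c], so changing one entry moves the statistic by at most 2c."""
    return float(np.clip(np.log(f1(x) / f0(x)), -c, c))

# N(0,1) vs N(1,1): the raw log-likelihood ratio is x - 1/2, unbounded in x.
f0 = lambda x: math.exp(-x**2 / 2)          # normalizing constants cancel
f1 = lambda x: math.exp(-(x - 1)**2 / 2)
clipped = truncated_llr(5.0, f0, f1, c=2.0)  # raw value 4.5, clipped to 2.0
```

Inside the clamp range the statistic is unchanged, so the truncation only affects extreme observations.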

Input: database D = {X_1, X_2, …}, distributions f_0, f_1, SPRT thresholds a, b, Gaussian mechanism M_1 with parameter σ_1 and M_2 with parameter σ_2, truncation parameter C
Let b̂ = M_1(b) and â = M_1(−a)
for each time n do
     Compute the truncated statistic Λ̃_n = Σ_{i=1}^n [log(f_1(X_i)/f_0(X_i))]_{−C}^{C}
     Let Λ̂ᵘ_n = M_2(Λ̃_n) and Λ̂ˡ_n = M_2(Λ̃_n)
     if Λ̂ᵘ_n ≥ b̂ then
          Halt and output d = 1 (reject H_0)
     else if Λ̂ˡ_n ≤ â then
          Halt and output d = 0 (accept H_0)
     else
          Proceed to the next iteration
     end if
end for
Algorithm 2 Private Sequential Probability Ratio Test: PrivSPRT(D, f_0, f_1, a, b, σ_1, σ_2, C)
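Putting the pieces together, the loop of Algorithm 2 can be sketched as follows. This is an illustrative implementation: the noise scales sigma1 and sigma2 are taken as inputs rather than calibrated as in the paper, and the example stream is a deterministic sequence of ones chosen so the behavior is easy to check.

```python
import numpy as np

def priv_sprt(stream, f0, f1, a, b, sigma1, sigma2, c, rng):
    """PrivSPRT sketch: truncate each log-likelihood increment to [-c, c],
    add fresh Gaussian noise to the cumulative statistic for each threshold
    comparison, and stop when a noisy statistic crosses a noisy threshold."""
    upper = b + rng.normal(0.0, sigma1)    # noisy version of b
    lower = -a + rng.normal(0.0, sigma1)   # noisy version of -a
    llr = 0.0
    for n, x in enumerate(stream, start=1):
        llr += float(np.clip(np.log(f1(x) / f0(x)), -c, c))
        if llr + rng.normal(0.0, sigma2) >= upper:
            return 1, n                    # reject H0
        if llr + rng.normal(0.0, sigma2) <= lower:
            return 0, n                    # accept H0
    return None, len(stream)               # no decision before data ran out

# Bernoulli(0.3) vs Bernoulli(0.7) on a stream of all ones (evidence for H1).
bern = lambda theta: (lambda x: theta**x * (1 - theta)**(1 - x))
decision, n = priv_sprt([1] * 100, bern(0.3), bern(0.7), a=np.log(20), b=np.log(20),
                        sigma1=0.1, sigma2=0.1, c=2.0, rng=np.random.default_rng(0))
```

With every observation pushing the statistic upward by about 0.85, rejection occurs after a few samples despite the noise.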

Comparing to standard AboveThresh.

One may wonder why GenAboveThresh is needed, and whether the original AboveThresh algorithm of [DNR09, DR14] with Laplace noise (also referred to as SparseVector) would be sufficient, perhaps with some loss in accuracy. In fact, this change to Laplace noise would break the desirable statistical properties of the (non-private) SPRT. The properties of the SPRT depend on the overshoot of the statistic over the thresholds, and to maintain first-order optimality of the expected sample size, controlling the second moments of the noisy statistics is necessary. Adding Laplace noise would make the variance too large, and thus the desirable properties break down.

Empirically, we show in Section 4 that using Laplace noise instead of Gaussian noise results in undesirable performance. On the theoretical side, statistical analysis of the SPRT is traditionally based on renewal theory and overshoot analysis in applied probability, which rely heavily on the central limit theorem (CLT); thus the standard techniques remain applicable when adding Gaussian noise for privacy. On the other hand, if we add Laplace noise, the standard statistical techniques are inapplicable for characterizing the overshoots; it remains an open problem to develop new tools to analyze the corresponding statistical properties.

3.2 Theoretical results on privacy, sample size, and error rates

In this subsection, we provide formal results on the privacy guarantees and statistical properties of PrivSPRT. For analyzing the expected sample size, we relate E_0[T] and E_1[T] to the input parameters (a, b, σ_1, σ_2, C). Similarly, for analyzing the error rates, we relate the Type I and Type II errors to these parameters. Recall that these errors respectively correspond to the false positive and false negative rates of the algorithm, defined as α = P_0(d = 1) and β = P_1(d = 0) for the decision d output by PrivSPRT.

While our statistical properties of sample size and error rate are analyzed under the assumption that the data follow either H_0 or H_1, as is standard in the statistics literature, our privacy guarantees hold unconditionally, regardless of the actual data distribution.

Privacy.

Privacy of PrivSPRT follows by composition of two parallel instantiations of Algorithm 1, one each for the upper and lower bounds on the noisy statistic. Theorem 6 gives Renyi divergence bounds for the outputs of GenAboveThresh on two neighboring databases, but it only implies Renyi differential privacy when the conditional expectation of the stopping time, or its moments, is bounded. [ZW20] shows that the stopping time of GenAboveThresh instantiated with Gaussian noise and non-negative queries has bounded moments of the conditional expectation of the stopping time, and thus it satisfies RDP. However, in our case the log-likelihood ratio queries can be negative, and this result cannot be immediately applied in our setting. Therefore, to prove that PrivSPRT is private, we must show that the expectation of the stopping time is bounded. We remark that we could alternatively halt Algorithm 2 when the sample size reaches a pre-specified upper bound, and then make a decision using fixed-sample-size hypothesis testing methods; however, this approach would require new analysis for the sample size and error rates [Sie13]. The full proof of Theorem 7 appears in Appendix A.1.

Theorem 7 (Privacy).

Let σ_1 and σ_2 denote the noise parameters of the two Gaussian mechanisms in Algorithm 2. Then Algorithm 2 satisfies (α, ε(α))-RDP for any α > 1, where ε(α) depends on σ_1, σ_2, and the moments of the conditional expectation of the stopping time.

Corollary 1.

When σ_1 and σ_2 are chosen to be the parameters specified in the Gaussian mechanisms that satisfy (ε, δ)-differential privacy, PrivSPRT in Algorithm 2 satisfies (α, ε(α))-RDP for any α > 1.

Because we are using the Gaussian mechanism as the noise-adding mechanism, the dependence of the privacy guarantee on the stopping time is unavoidable. The moment terms in Theorem 7 are moments of the conditional expectation of the stopping time, which depend on the true underlying distribution that generated the data; they are roughly of the order of the expected stopping times under the two hypotheses. Theorem 7 and Corollary 1 further imply an (ε, δ)-differential privacy bound for PrivSPRT by Theorem 4: when σ_1 and σ_2 are chosen to be the parameters specified in the Gaussian mechanisms that satisfy (ε, δ)-differential privacy, PrivSPRT is differentially private with parameters obtained via Theorem 4.

Sample Size.

When analyzing the statistical properties of PrivSPRT, an important quantity is the expectation of the truncated individual log-likelihood ratios under each hypothesis.

When C goes to infinity, these expectations converge to the KL divergences between f_0 and f_1.

A technical challenge that arises in bounding the expected sample size is that the noisy log-likelihood ratio at time n cannot be decomposed into a summation of i.i.d. random variables, because of the noise terms. This precludes the use of Wald's identity [Wal44], which is used in the proof of bounded sample size for non-private SPRT, and relates the expectation of a sum of randomly-many finite-mean, i.i.d. random variables to the expected number of terms in the sum and the expectation of the random variables.

Instead, we leverage the critical fact that E[T] = Σ_{n≥0} P(T > n), and thus relate the expected sample size to the probability of the noisy truncated log-likelihood ratio being within the noisy thresholds at each time n. Since this event is less probable for large n, we partition the time range into several sub-intervals and bound the probability in each sub-interval separately. This yields our bound on the expected sample size in Theorem 8, which recovers the non-private sample size result as the noise parameters σ_1 and σ_2 go to 0, and is first-order optimal. We note that a similar idea of partitioning the whole range into sub-intervals also appears in [LM20], where it was applied only for handling Gaussian data.

The last term in the bound of Theorem 8 is the additional cost that comes from adding Gaussian noise, which quantifies the cost of privacy. In the proof, we permit large values of the difference between the Gaussian noise added to the statistic and to the thresholds for large n, which reduces the additional expected sample size required for privacy. The analysis relies on partitioning the range into k intervals and on a time-specific threshold depending on a constant, and the results are stated under the optimal choice of these quantities. The proof is given in Appendix A.2.

Theorem 8 (Sample Size).

The expected sample size of PrivSPRT under satisfies where . Similarly, the expected sample size under satisfies where .

To interpret the results in Theorem 8, we choose specific (potentially suboptimal) values of the parameters. The first term of the resulting bound is the same as in the classical non-private results, and the second term is the additional cost for privacy. Since σ_1 and σ_2 will be chosen to scale with the privacy parameter ε, the additional cost for privacy scales accordingly, with the expectation of the truncated log-likelihood ratios serving as a distance measure similar to the KL divergence. Our expected sample size and error rate results converge to the classical non-private results up to lower-order terms, ignoring the dependence on C. The asymptotic dependence on ε matches the sample complexity dependence on ε in the simpler problem of private simple hypothesis testing [CKM19].

Error rates.

We now provide guarantees for the Type I and Type II error rates of PrivSPRT. In the classical sequential hypothesis testing literature for the non-private SPRT, the standard technique to characterize the error rates is based on the change-of-measure method, which heavily utilizes the likelihood ratio statistics. Unfortunately, the test statistics of PrivSPRT are no longer likelihood ratios, since the algorithm adds Gaussian noise and truncates the log-likelihood for privacy. As a result, the standard change-of-measure technique is no longer applicable.

To characterize the error rates of PrivSPRT, we apply an alternative method based on brute-force estimation of the error probabilities, which was first proposed in [SK15] in the context of distributed hypothesis testing in sensor networks. It turns out that this alternative method is also applicable to the setting of PrivSPRT. The main idea is as follows: the Type I error, α = P_0(d = 1), can be written as the sum over n of the probability that the noisy log-likelihood ratio is above the noisy threshold at time n and the stopping time equals n. We then partition the range of time into several sub-intervals and analyze them separately, as in the expected sample size analysis. Although the high-level approach is similar, the sub-intervals need to be carefully chosen here to give a meaningful bound for the error rates. The detailed proof is deferred to Appendix A.3.

Theorem 9 (Error Rate).

Let be the decision output by PrivSPRT. Then the Type I error is bounded by:

(5)

where , and , and . The Type II error is bounded by:

(6)

where , , and , and .

To interpret the results in Theorem 9, choosing and gives . Again, the first term is the same as the non-private result . The additional term quantifies the cost of privacy. Since we instantiate the Gaussian mechanisms with noise parameters and proportional to the sensitivity and the privacy parameter , the additional error term reduces to . This implies that the algorithm incurs a larger error rate for stronger privacy guarantees.

4 Numerical Results

In this section, we present results from Monte Carlo experiments designed to validate the theoretical results for PrivSPRT. We only need to validate the statistical properties of PrivSPRT (sample size and error rates), since the privacy guarantee holds in the worst case over databases and hypotheses. In Section 4.1, we focus on sequentially testing means of Bernoulli distributions; in Section 4.2, we provide additional empirical results on testing means of Gaussian distributions. In Section 4.3, we demonstrate empirically that the classic AboveThresh mechanism does not provide satisfactory performance in terms of sample size and error rates, justifying the algorithmic modifications made in PrivSPRT.

4.1 Testing on Bernoulli Data

In this section, our experiments focus on Bernoulli data, where are sampled i.i.d. from a Bernoulli distribution with parameter . Monitoring Bernoulli data was one of the earliest applications of fully sequential design in clinical trials; see [Arm50]. For instance, one may want to evaluate the effect of a new drug or treatment on the mortality rate of an unknown infectious disease such as COVID-19 in a sub-population.

Here we consider two scenarios that are simple yet useful for shedding light on real-world applications. In the first, the distance between the null hypothesis and the alternative hypothesis on is large, say against , e.g., a new treatment is expected to significantly reduce the mortality rate among people aged 65 years or older in a developing country. In the second, the distance between the null hypothesis and the alternative hypothesis on is small, say against , e.g., the effect of a drug on a certain age group with certain diseases in a developed country. Under this setting, the expected sample sizes under and are identical, and similarly the Type I and Type II errors are identical. For simplicity, we will use and error to denote the expected sample size and the error, respectively.
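As a point of reference for these scenarios, the classical non-private SPRT for a Bernoulli parameter can be sketched in a few lines. The hypothesis values and thresholds below are illustrative choices for exposition, not the exact settings used in our experiments:

```python
import math
import random

def bernoulli_llr(x, p0, p1):
    """Per-sample log-likelihood ratio log(f1(x) / f0(x)) for x in {0, 1}."""
    return math.log(p1 / p0) if x == 1 else math.log((1 - p1) / (1 - p0))

def sprt_bernoulli(samples, p0, p1, a, b):
    """Wald's SPRT: accumulate log-likelihood ratios until the running sum
    exits the interval (-a, b); returns (decision, stopping_time), where
    decision 1 accepts H1 and decision 0 accepts H0."""
    s = 0.0
    for n, x in enumerate(samples, start=1):
        s += bernoulli_llr(x, p0, p1)
        if s >= b:
            return 1, n   # accept H1
        if s <= -a:
            return 0, n   # accept H0
    return (1 if s >= 0 else 0), len(samples)  # forced decision at the horizon

rng = random.Random(0)
data = [1 if rng.random() < 0.8 else 0 for _ in range(1000)]  # true parameter 0.8
decision, stop_n = sprt_bernoulli(data, p0=0.2, p1=0.8,
                                  a=math.log(19), b=math.log(19))
```

With such a large separation between the two hypothesized parameters, the test typically stops after only a handful of samples, illustrating why the sequential design is attractive when collecting samples is costly.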

To obtain accurate estimates of the Type I and Type II errors, we use importance sampling in the Monte Carlo simulations. This is because the importance-sampling estimate of the Type I error based on independent trials, in which the samples are generated from , has much smaller variance than the naive estimate in which the samples are generated from .
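The following toy sketch illustrates the importance-sampling idea in a simpler setting: estimating a small Gaussian tail probability by drawing from a shifted proposal distribution and reweighting by the likelihood ratio. The shift value and trial count here are illustrative, not tied to the paper's experiments:

```python
import math
import random

def is_estimate_tail(c, n_trials, seed=1):
    """Importance-sampling estimate of P(X > c) for X ~ N(0, 1),
    drawing proposals from N(c, 1) and reweighting each hit by the
    likelihood ratio phi(x) / phi(x - c) = exp(-c*x + c^2/2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        x = rng.gauss(c, 1.0)          # sample under the shifted proposal N(c, 1)
        if x > c:                      # indicator of the rare event
            total += math.exp(-c * x + c * c / 2.0)  # importance weight
    return total / n_trials

est = is_estimate_tail(c=3.0, n_trials=100_000)
true_tail = 0.5 * math.erfc(3.0 / math.sqrt(2))  # exact P(N(0,1) > 3)
```

A naive estimator would need on the order of 1/p trials just to observe a few occurrences of a rare event of probability p; the shifted proposal makes the event common and corrects for the shift through the weights. The same principle underlies our use of importance sampling for the error-rate estimates.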

We use two -differentially private Gaussian mechanisms as the noise-adding mechanisms in PrivSPRT, corresponding to and . Although the log-likelihood ratio is already uniformly bounded for Bernoulli data, we still invoke the truncation with parameter , because and are linear in for Bernoulli data, which makes the validation easier. For each simulation, we repeat the process times. The results are presented in Figure 1, which plots the expected sample size against on a log scale, varying the privacy parameter . The figure shows that when we require stronger privacy, i.e., as becomes smaller, the expected sample size grows for given Type I and Type II error constraints. This is consistent with our intuition about the trade-off between privacy and statistical efficiency.
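To make the construction concrete, the following simplified sketch captures the two algorithmic ingredients discussed above: truncating each log-likelihood increment to bound its sensitivity, and comparing a noisy running statistic against noisy thresholds. This is a stylized stand-in, not the exact PrivSPRT mechanism or its calibrated noise scales:

```python
import math
import random

def noisy_truncated_sprt(samples, llr, trunc, a, b,
                         sigma_thresh, sigma_stat, seed=0):
    """Stylized sketch of a private SPRT (not the exact PrivSPRT mechanism):
    each LLR increment is clipped to [-trunc, trunc] so one sample moves the
    statistic by a bounded amount, the two thresholds are perturbed once with
    Gaussian noise, and fresh Gaussian noise is added to the running statistic
    before every comparison. Setting both sigmas to 0 recovers the
    non-private truncated SPRT."""
    rng = random.Random(seed)
    noisy_b = b + rng.gauss(0, sigma_thresh)
    noisy_a = a + rng.gauss(0, sigma_thresh)
    s = 0.0
    for n, x in enumerate(samples, start=1):
        s += max(-trunc, min(trunc, llr(x)))  # truncation bounds sensitivity
        noisy_s = s + rng.gauss(0, sigma_stat)
        if noisy_s >= noisy_b:
            return 1, n   # accept H1
        if noisy_s <= -noisy_a:
            return 0, n   # accept H0
    return (1 if s >= 0 else 0), len(samples)

rng = random.Random(1)
data = [1 if rng.random() < 0.8 else 0 for _ in range(500)]
llr = lambda x: math.log(0.8 / 0.2) if x == 1 else math.log(0.2 / 0.8)
decision, stop_n = noisy_truncated_sprt(data, llr, trunc=1.0, a=5.0, b=5.0,
                                        sigma_thresh=0.0, sigma_stat=0.0)
```

In the actual algorithm the noise scales are calibrated to the truncation parameter and the target Rényi differential privacy level; here they are free parameters so the non-private limit is easy to check.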

We also conduct experiments for testing against when and are not symmetric. We vary the truncation parameter in these experiments. For each fixed and , we choose thresholds and through Monte Carlo simulation to control the Type I and Type II errors at the same level (). The results of these simulations are presented in Table 1.

Figure 1: Three-way trade-off between privacy, expected sample size, and error rate. For large distance (left), we are testing against ; for small distance (right), we are testing against .
, error rates
0.05 0.5 8,7.5 139.662 172.89
1 4.3, 4.3 86.12 122.144
2 2.5, 2.5 61.683 88.307
0.2 0.5 32,32 139.456 195.504
1 16.8, 16.8 85.336 123.542
2 9.5,9.5 56.645 83.127
0.5 0.5 80,80 139.252 199.718
1 43, 43 88.137 127.986
2 25, 25 61.494 88.182
0.7 0.5 125,120 173.305 227.387
1 63,63 95.304 136.336
2 35,35 61.944 87.363
16, 16 29.607 28.318
Table 1: Numerical values of expected sample size under and , Type I error and Type II error for testing the Bernoulli parameter.

Table 1 shows three positive results. First, for each fixed privacy parameter , the expected sample sizes are almost the same across varying , and the thresholds are almost linear in . This suggests that the expected sample size (resp. ) is proportional to or (resp. or ). The parameter controls a trade-off between how much information is lost by truncating the log-likelihood ratios and how much noise is added for privacy. Thus the expected sample sizes are larger for a larger with , as the additional noise starts to dominate the information provided by the log-likelihood ratios. Second, in our setting, the expectations of the truncated log-likelihood ratios are and . We see from Table 1 that is roughly in all cases, which further validates the claim in Theorem 8 that (resp. ) is (resp. ). Third, for each fixed , (resp. ) decreases as increases (weaker privacy), which is consistent with Theorem 8, because the additional privacy cost does not involve the threshold and decreases for weaker privacy.

4.2 Testing on Gaussian Data

In this section, our experiments focus on testing means of Gaussian data, where are sampled i.i.d. from a Gaussian distribution with mean . We again consider two different scenarios: large distance between the null and alternative hypotheses on corresponding to against , and a small distance between the null and alternative hypotheses on corresponding to against . We will denote the expected sample size as since and are identical for Gaussian data, and similarly, we denote the Type I error and Type II error as errors. We again use two -differentially private Gaussian mechanisms as the noise-adding mechanisms.
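For concreteness, the per-sample log-likelihood ratio in the unit-variance Gaussian case has a simple closed form that is linear in the observation. The means used below are illustrative, and the sketch checks the closed form against the density ratio computed directly:

```python
import math

def gaussian_llr(x, mu0, mu1, sigma=1.0):
    """Log-likelihood ratio log(f1(x) / f0(x)) for one N(mu, sigma^2) sample;
    it is linear in x: ((mu1 - mu0) * x + (mu0^2 - mu1^2) / 2) / sigma^2."""
    return ((mu1 - mu0) * x + (mu0 ** 2 - mu1 ** 2) / 2.0) / sigma ** 2

def log_density(x, mu, sigma=1.0):
    """Log density of N(mu, sigma^2) at x, used only as a sanity check."""
    return (-0.5 * math.log(2 * math.pi * sigma ** 2)
            - (x - mu) ** 2 / (2 * sigma ** 2))

x = 0.7
direct = log_density(x, 1.0) - log_density(x, 0.0)  # density ratio directly
closed = gaussian_llr(x, 0.0, 1.0)                  # closed-form expression
```

Linearity of the increment in the observation is what makes the truncated statistic easy to reason about in this setting.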

In Figure 2, we plot the expected sample size against the error on a log scale, varying the privacy parameter . This experiment is conducted with the truncation threshold fixed, while varying the decision threshold . For each simulation, we repeat the process times and report the average performance. As with the Bernoulli data, we observe a larger expected sample size for a given Type I and Type II error constraint as decreases. Additionally, fewer samples are needed to distinguish the hypotheses in the large-distance regime (left vs. right; note the different scales on the y-axes).

Figure 2: Three-way trade-off between privacy, expected sample size, and error rate. For large distance (left), we are testing against ; for small distance (right), we are testing against .

We again conduct experiments to further validate our theoretical results empirically in the Gaussian setting. In Table 2 (left), we vary the truncation parameter and the privacy parameter . For each fixed and , we choose thresholds through Monte Carlo simulation with importance sampling to control Type I and Type II errors at the level. Similar to the results for testing the Bernoulli parameter, the thresholds are almost linear with respect to . The expected sample sizes and are almost the same for all , and and increase for a larger , because the noise added for privacy dominates the information provided by the log-likelihood ratios. This suggests that a relatively small is preferred, and as long as is not too large, it has little impact on the performance. Moreover, we observe that (resp. ) decreases as increases for weaker privacy, as the additional cost term in Theorem 8 decreases for less noise.

4.3 Using the standard AboveThresh.

To compare against the performance of PrivSPRT, we also conduct experiments for testing means of Gaussian data using the original AboveThresh algorithm with Laplace noise, which satisfies -differential privacy. We vary the truncation parameter and choose the thresholds such that the Type I and Type II errors are below . The results are presented in Table 2 (right), which shows that the original AboveThresh algorithm with Laplace noise results in much larger expected sample sizes when the Type I and Type II errors are fixed at the level. We note that although the overall privacy cost of PrivSPRT is slightly larger, PrivSPRT provides a better trade-off between privacy and accuracy.
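For reference, a one-sided AboveThresh-style stopping rule with Laplace noise, in the spirit of the baseline compared here, can be sketched as follows. The noise scales (two and four times the sensitivity over the privacy parameter) follow the standard sparse-vector calibration [DR14]; the query stream and parameters below are illustrative:

```python
import math
import random

def laplace(rng, scale):
    """Laplace(0, scale) variate, drawn as a difference of two exponentials."""
    return rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)

def above_thresh_stop(increments, threshold, eps, sensitivity, seed=0):
    """One-sided AboveThresh-style stopping rule: perturb the threshold once
    with Laplace(2 * sensitivity / eps) noise, perturb each query with
    Laplace(4 * sensitivity / eps) noise, and stop at the first time the
    noisy running sum exceeds the noisy threshold (None if it never does)."""
    rng = random.Random(seed)
    noisy_t = threshold + laplace(rng, 2 * sensitivity / eps)
    s = 0.0
    for n, inc in enumerate(increments, start=1):
        s += inc
        if s + laplace(rng, 4 * sensitivity / eps) >= noisy_t:
            return n
    return None

gains = [1.0] * 50  # deterministic unit increments, for illustration only
n = above_thresh_stop(gains, threshold=9.5, eps=100.0, sensitivity=1.0)
```

PrivSPRT replaces this Laplace noise with Gaussian noise (analyzed under Rényi differential privacy) and adds truncation; the experiments in this subsection show that doing so yields markedly smaller expected sample sizes at the same error level.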

We also empirically study the overshoot property when adding Laplace noise. We again consider testing against for Bernoulli data; we choose this setting because , so that we obtain a comprehensive view of and , and of the Type I and Type II errors. We now fix the truncation parameter , and vary the privacy parameter and the thresholds . The results are presented in Table 3.

PrivSPRT with Gaussian noise    AboveThresh with Laplace noise
error rates error rates
0.5 0.5 9 0.05 12.547 0.5 0.5 28 0.05 22.821
1 4 7.298 1 12 12.621
2 2.1 4.890 2 6.5 10.482
1 0.5 18 12.485 1 0.5 59 26.032
1 8.2 7.367 1 29 17.165
2 4 4.792 2 15 13.291
2 0.5 36 13.333 2 0.5 112 23.581
1 16 7.460 1 52 15.438
2 8 5.000 2 28 12.564
5 0.5 90 16.943 5 0.5 270 24.426
1 40 10.156 1 140 21.067
2 98 6.190 2 70 15.872
2 1.793
Table 2: Numerical values of expected sample size , error rates for testing the Gaussian mean using PrivSPRT(left) and the original AboveThresh with Laplace noise (right). The thresholds are chosen to control Type I and Type II error at (within the Monte Carlo simulation errors).

On the theoretical side, we expect the expected sample size to be for the non-private SPRT. However, Table 3 shows that the expected sample sizes are nonlinear in the thresholds under strong privacy (), which is no longer consistent with the central limit theorem behavior of the non-private SPRT. In contrast, we observed from Table 1 that (resp. ) is (resp. ) when adding Gaussian noise in PrivSPRT. Intuitively, it appears that the overshoot when adding Laplace noise is driven largely by the additional noise, rather than by the statistical information provided by the log-likelihood ratios. Characterizing the relevant statistical properties under Laplace noise requires new tools, which we leave as future work for the privacy and statistics communities.

Type I Type II
10 0.5 0.3634 0.3562 3.537 3.762
1 0.2184 0.3132 9.246 10.399
2 0.0181 0.2185 22.317 29.035
20 0.5 0.2577 0.1750 11.824 11.62
1 0.0235 0.0140 35.353 43.121
2 1.03e-05 3.53e-05 55.136 77.450
40 0.5 0.0164 0.0266 51.257 66.529
1 8.11e-08 2.4e-04 99.026 144.114
2 1.04e-20 3.79e-19 121.27 179.172
Table 3: Numerical values of expected sample sizes and , Type I error and Type II error for testing Bernoulli parameter using the original AboveThresh algorithm with Laplace noise.

References

  • [Arm50] Peter Armitage. Sequential analysis with more than two alternative hypotheses, and its relation to discriminant function analysis. Journal of the Royal Statistical Society. Series B (Methodological), 12(1):137–144, 1950.
  • [Arm54] Peter Armitage. Sequential tests in prophylactic and therapeutic trials. Quarterly Journal of Medicine, 23(91):255–274, 1954.
  • [CKLZ20] Rachel Cummings, Sara Krehbiel, Yuliia Lut, and Wanrong Zhang. Privately detecting changes in unknown distributions. In Proceedings of the 37th International Conference on Machine Learning, pages 958–968, 2020.
  • [CKM18] Rachel Cummings, Sara Krehbiel, Yajun Mei, Rui Tuo, and Wanrong Zhang. Differentially private change-point detection. In Advances in Neural Information Processing Systems, pages 10825–10834, 2018.
  • [CKM19] Clément L Canonne, Gautam Kamath, Audra McMillan, Adam Smith, and Jonathan Ullman. The structure of optimal private tests for simple hypotheses. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, pages 310–321, 2019.
  • [CKS19] Simon Couch, Zeki Kazan, Kaiyan Shi, Andrew Bray, and Adam Groce. Differentially private nonparametric hypothesis testing. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, pages 737–751, 2019.
  • [DeM98] David L. DeMets. Sequential designs in clinical trials. Cardiac Electrophysiology Review, 2(1):57–60, 1998.
  • [Dif17] Differential Privacy Team, Apple. Learning with privacy at scale. https://machinelearning.apple.com/docs/learning-with-privacy-at-scale/appledifferentialprivacysystem.pdf, December 2017.
  • [DKY17] Bolin Ding, Janardhan Kulkarni, and Sergey Yekhanin. Collecting telemetry data privately. In Advances in Neural Information Processing Systems 30, NIPS ’17, pages 3571–3580. Curran Associates, Inc., 2017.
  • [DLS17] Aref N. Dajani, Amy D. Lauger, Phyllis E. Singer, Daniel Kifer, Jerome P. Reiter, Ashwin Machanavajjhala, Simson L. Garfinkel, Scot A. Dahl, Matthew Graham, Vishesh Karwa, Hang Kim, Philip Lelerc, Ian M. Schmutte, William N. Sexton, Lars Vilhuber, and John M. Abowd. The modernization of statistical disclosure limitation at the U.S. census bureau, 2017. Presented at the September 2017 meeting of the Census Scientific Advisory Committee.
  • [DMNS06] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In Proceedings of the 3rd Conference on Theory of Cryptography, TCC ’06, pages 265–284, 2006.
  • [DNR09] Cynthia Dwork, Moni Naor, Omer Reingold, Guy N. Rothblum, and Salil P. Vadhan. On the complexity of differentially private data release: efficient algorithms and hardness results. In Proceedings of the 41st ACM Symposium on Theory of Computing, STOC ’09, pages 381–390, 2009.
  • [DR14] Cynthia Dwork and Aaron Roth. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3–4):211–407, 2014.
  • [EPK14] Úlfar Erlingsson, Vasyl Pihur, and Aleksandra Korolova. RAPPOR: Randomized aggregatable privacy-preserving ordinal response. In Proceedings of the 2014 ACM Conference on Computer and Communications Security, CCS ’14, pages 1054–1067, New York, NY, USA, 2014. ACM.
  • [GLRV16] Marco Gaboardi, Hyun Lim, Ryan Rogers, and Salil Vadhan. Differentially private chi-squared hypothesis testing: Goodness of fit and independence testing. In International conference on machine learning, pages 2111–2120. PMLR, 2016.
  • [GR18] Marco Gaboardi and Ryan Rogers. Local private hypothesis testing: Chi-square tests. In International Conference on Machine Learning, pages 1626–1635. PMLR, 2018.
  • [GS91] Bhaskar Kumar Ghosh and Pranab Kumar Sen. Handbook of sequential analysis. CRC Press, 1991.
  • [Har20] Frank E. Harrell. Sequential bayesian designs for rapid learning in COVID-19 clinical trials. URL https://www.fharrell.com/talk/seqbayes/ Last checked 10/15/21, 2020.
  • [LM20] Kun Liu and Yajun Mei. Improved performance properties of the cisprt algorithm for distributed sequential detection. Signal Processing, 172:107573, 2020.
  • [Mir17] Ilya Mironov. Rényi differential privacy. In 2017 IEEE 30th Computer Security Foundations Symposium (CSF), pages 263–275. IEEE, 2017.
  • [She18] Or Sheffet. Locally private hypothesis testing. In International Conference on Machine Learning, pages 4605–4614. PMLR, 2018.
  • [Sie13] David Siegmund. Sequential analysis: tests and confidence intervals. Springer Science & Business Media, 2013.
  • [SK15] Anit Kumar Sahu and Soummya Kar. Distributed sequential detection for gaussian shift-in-mean hypothesis testing. IEEE Transactions on Signal Processing, 64(1):89–103, 2015.
  • [Wal44] Abraham Wald. On cumulative sums of random variables. The Annals of Mathematical Statistics, 15(3):283–296, 1944.
  • [Wal45] Abraham Wald. Sequential tests of statistical hypotheses. The Annals of Mathematical Statistics, 16(2):117–186, 1945.
  • [Wal04] Abraham Wald. Sequential analysis. Courier Corporation, 2004.
  • [WBK19] Yu-Xiang Wang, Borja Balle, and Shiva Prasad Kasiviswanathan. Subsampled Rényi differential privacy and analytical moments accountant. In Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, volume 89 of Proceedings of Machine Learning Research, pages 1226–1235. PMLR, 2019.
  • [Whi97] John Whitehead. The Design and Analysis of Sequential Clinical Trials. Wiley, New York, revised second edition edition, 1997.
  • [WSMD20] Yu Wang, Hussein Sibai, Sayan Mitra, and Geir E Dullerud. Differential privacy for sequential algorithms. arXiv preprint arXiv:2004.00275, 2020.
  • [WW48] A. Wald and J. Wolfowitz. Optimum Character of the Sequential Probability Ratio Test. The Annals of Mathematical Statistics, 19(3):326 – 339, 1948.
  • [ZKT21] Wanrong Zhang, Sara Krehbiel, Rui Tuo, Yajun Mei, and Rachel Cummings. Single and multiple change-point detection with differential privacy. Journal of Machine Learning Research, 22(29):1–36, 2021.
  • [ZW20] Yuqing Zhu and Yu-Xiang Wang. Improving sparse vector technique with Rényi differential privacy. In Advances in Neural Information Processing Systems, 33, 2020.

Appendix A Omitted Proofs

In this appendix, we provide proofs for our main theorems, which were omitted in the main body of the paper. We restate the theorems here for convenience.

A.1 Proof of Privacy

See Theorem 7.

Proof.

We first show that the expectation of the stopping time is bounded given and ; to do so, we show the equivalent fact that for . Define a constant . If , then for any positive integer , the following inequalities must hold:

(7)

where . We can further express as a summation of independent Gaussians , and then (7) is equivalent to

(8)

To prove for , it is sufficient to show that the probability is zero that (8) holds for all integer values of . Since the variance of is not zero, and it is bounded below by the variance of , the expected value of converges to as goes to . Therefore, there exists a positive integer such that