# Detection of Sparse Positive Dependence

In a bivariate setting, we consider the problem of detecting a sparse contamination or mixture component, where the effect manifests itself as a positive dependence between the variables, which are otherwise independent in the main component. We first look at this problem in the context of a normal mixture model. In essence, the situation reduces to a univariate setting where the effect is a decrease in variance. In particular, a higher criticism test based on the pairwise differences is shown to achieve the detection boundary defined by the (oracle) likelihood ratio test. We then turn to a Gaussian copula model where the marginal distributions are unknown. Standard invariance considerations lead us to consider rank tests. In fact, a higher criticism test based on the pairwise rank differences achieves the detection boundary in the normal mixture model, although not in the very sparse regime. We do not know of any rank test that has any power in that regime.


## 1 Introduction

The detection of rare effects has long been an important problem in a number of settings, and may be particularly relevant today, for example, in the search for personalized care in the health industry, where a small fraction of a population may respond particularly well, or particularly poorly, to a given treatment [18].

Following a theoretical investigation initiated in large part by Ingster [14] and broadened by Donoho and Jin [8], we are interested in studying two-component mixture models, also known as contamination models, in various asymptotic regimes defined by how the small mixture weight converges to zero. Most of the existing work in the setting of univariate data has focused on models where the contamination manifests itself as a shift in mean [10, 9, 12, 6, 17] with a few exceptions where the effect is a change in variance [2], or a change in both mean and variance [7].

In the present paper, we are interested in bivariate data instead, and more specifically in a situation where the effect is felt in the dependence between the two variables being measured. This setting has recently been considered in the literature in the context of assessing the reproducibility of studies. For example, Li et al. [16] aimed to identify significant features from separate studies using an expectation-maximization (EM) algorithm. They applied a copula mixture model and assumed that changes in the mean and covariance matrix differentiate the contaminated component from the null component.

Zhao et al. [21] studied another model where variables from the contamination are stochastically larger marginally. In both models, the marginal distributions have some non-null effects. Similar settings have been considered within a multiple testing framework [5, 20].

While existing work has focused on models motivated by questions of reproducibility, in the present work we come back to basics and directly address the problem of detecting a bivariate mixture with a component where the variables are independent and a component where the variables are positively dependent.

### 1.1 Gaussian Mixture Model

Ingster [14] and Donoho and Jin [8] started with a mixture of Gaussians, and we do the same; in our setting, this means we consider the following mixture model

$$(X,Y) \sim (1-\varepsilon)\, N(0,I) + \varepsilon\, N(0,\Sigma_\rho), \qquad \Sigma_\rho := \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}, \tag{1}$$

where $\varepsilon \in [0,1]$ is the contamination proportion and $\rho \in (0,1)$ is the correlation between the two variables under contamination. We consider the following hypothesis testing problem: based on $(X_1,Y_1), \dots, (X_n,Y_n)$ drawn iid from (1), decide

$$H_0: \varepsilon = 0 \quad \text{versus} \quad H_1: \varepsilon > 0,\ \rho > 0. \tag{2}$$

Note that under the null hypothesis, $(X,Y)$ is from the bivariate standard normal. Under the alternative, $X$ and $Y$ remain standard normal marginally. Following the literature on the detection of sparse mixtures [14, 8], we are most interested in a situation, asymptotic as $n \to \infty$, where $\varepsilon \to 0$, and the central question is how large $\rho$ needs to be in order to reliably distinguish these hypotheses.
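To fix ideas, the mixture (1) and the testing problem (2) are easy to simulate. The sketch below (the function and parameter names are our own, not the paper's) draws a contaminated sample and confirms that the marginals stay standard normal while the covariance is only $\varepsilon\rho$:

```python
import numpy as np

def sample_mixture(n, eps, rho, rng):
    """Draw n iid pairs from (1 - eps) N(0, I) + eps N(0, Sigma_rho), as in (1)."""
    contaminated = rng.random(n) < eps
    x = rng.standard_normal(n)
    z = rng.standard_normal(n)
    # In the contaminated component, Y = rho*X + sqrt(1 - rho^2)*Z has
    # correlation rho with X; in the main component, Y = Z is independent of X.
    y = np.where(contaminated, rho * x + np.sqrt(1 - rho**2) * z, z)
    return x, y

rng = np.random.default_rng(0)
x, y = sample_mixture(100_000, eps=0.1, rho=0.8, rng=rng)
# Marginally, X and Y remain standard normal; Cov(X, Y) = eps * rho = 0.08.
```

Note that $Y$ is built conditionally on the mixture label, which is one of several equivalent ways to sample from (1).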

The formulation (1) suggests that the alternative hypothesis is composite, but if we assume that $(\varepsilon, \rho)$ are known under the alternative, then the likelihood ratio test (LRT) is optimal by the Neyman–Pearson lemma. We start by characterizing the behavior of the LRT, which provides a benchmark. We then study some other testing procedures that do not require knowledge of the model parameters (such procedures are said to be adaptive):

• The covariance test rejects for large values of $T_n := \sum_{i=1}^n X_i Y_i$, and coincides with Rao’s score test in the present context. This is the classical test for independence, specifically designed for the case where $\varepsilon = 1$ and $\rho > 0$ under the alternative. We shall see that it is suboptimal in some regimes.

• The extremes test rejects for small values of $\min_i |X_i - Y_i|$. This test exploits the fact that, because $\rho$ is assumed positive, the variables in the contaminated component are closer to each other than in the null component.

• The higher criticism test was suggested by John Tukey and deployed by Donoho and Jin [8] for the testing of sparse mixtures. We propose a version of that test based on the pairwise differences, $U_i := (X_i - Y_i)/\sqrt{2}$. In detail, the test rejects for large values of

$$\sup_{u \ge 0} \frac{\sqrt{n}\,\big(F_n(u) - \Psi(u)\big)}{\sqrt{\Psi(u)(1-\Psi(u))}}, \tag{3}$$

where $\Psi(u) := \mathbb{P}(|N(0,1)| \le u) = 2\Phi(u) - 1$, with $\Phi$ denoting the standard normal distribution function, and $F_n$ the empirical distribution function of $|U_1|, \dots, |U_n|$.
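A direct implementation of (3) may help make the statistic concrete. The sketch below evaluates the supremum at the order statistics of $|U_i|$ and, to stabilize the edges, restricts it to points where $\Psi$ is between $1/n$ and $1 - 1/n$ (an edge restriction of our own choosing, in the spirit of HC+; the analysis in the paper does not require it):

```python
import numpy as np
from scipy.stats import norm

def higher_criticism(x, y):
    """HC statistic (3) based on the pairwise differences U_i = (X_i - Y_i)/sqrt(2)."""
    n = len(x)
    u = np.sort(np.abs(x - y) / np.sqrt(2))
    fn = np.arange(1, n + 1) / n        # empirical cdf of |U_i| at its order statistics
    psi = 2 * norm.cdf(u) - 1           # Psi(u) = P(|N(0,1)| <= u)
    keep = (psi >= 1 / n) & (psi <= 1 - 1 / n)   # edge restriction (our choice)
    stat = np.sqrt(n) * (fn - psi) / np.sqrt(psi * (1 - psi))
    return stat[keep].max()

rng = np.random.default_rng(1)
n = 10_000
x0, y0 = rng.standard_normal(n), rng.standard_normal(n)       # null: independent
mask = rng.random(n) < 0.3                                     # alternative: eps = 0.3
x1, z = rng.standard_normal(n), rng.standard_normal(n)
y1 = np.where(mask, 0.95 * x1 + np.sqrt(1 - 0.95**2) * z, z)  # rho = 0.95
hc_null, hc_alt = higher_criticism(x0, y0), higher_criticism(x1, y1)
```

Under the alternative, the variance reduction in the $U_i$'s inflates $F_n$ above $\Psi$ near the origin, which drives the statistic up.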

As is common practice in this line of work [14, 8], under $H_1$ we set

$$\varepsilon = n^{-\beta}, \qquad \beta \in (0,1) \text{ fixed.} \tag{4}$$

The setting where $\beta \le 1/2$ is often called the dense regime and the setting where $\beta > 1/2$ is often called the sparse regime. Our analysis reveals the following:

1. Dense regime. The dense regime is most interesting when $\beta < 1/2$. In that case, we find that the covariance test and the higher criticism test match the asymptotic performance of the likelihood ratio test to first order, while the extremes test has no power.

2. Sparse regime. The sparse regime is most interesting when $\beta > 1/2$. In that case, we find that the higher criticism test still performs as well as the likelihood ratio test to first order, while the covariance test is powerless, and the extremes test is suboptimal.

### 1.2 Gaussian Mixture Copula Model

From a practical point of view, the assumption that both $X$ and $Y$ are normally distributed is quite stringent. Hence, we would like to know if there are nonparametric procedures that do not require such a condition but can still achieve the same performance as the likelihood ratio test. In the univariate setting where the effect arises as a shift in mean, this was investigated in [3]. In the bivariate setting, in a model for reproducibility, Zhao et al. [21] proposed a nonparametric test based on a weighted version of Hoeffding’s test for independence.

Here, instead of model (1), we suppose $(X,Y)$ follows a Gaussian mixture copula model (GMCM) [4], meaning that there is a latent random vector $(Z_1, Z_2)$ such that

$$F(X) = \Phi(Z_1), \quad G(Y) = \Phi(Z_2), \quad (Z_1, Z_2) \sim (1-\varepsilon)\, N(0,I) + \varepsilon\, N(0,\Sigma_\rho), \quad \Sigma_\rho := \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}, \tag{5}$$

where $F$ and $G$ are unknown distribution functions on the real line, and $\Phi$ is the standard normal distribution function, while $\varepsilon$ is the contamination proportion and $\rho$ is the correlation between $Z_1$ and $Z_2$ in the contaminated component, as before in model (1). Li et al. [16] also used a copula mixture model, but they placed emphasis on the mean while we focus on the dependence.

We still consider the testing problem (2), but now in the context of model (5). The setting is nonparametric in that both $F$ and $G$ are unknown. Model (5) is crafted in such a way that the marginal distributions of $X$ and $Y$ contain absolutely no information pertinent to the testing problem under consideration. Figure 1 provides an illustration.

The model is also attractive because of an invariance under all increasing marginal transformations of the variables. This is the same invariance that leads to considering rank-based methods such as the Spearman correlation test [15, Chp 6]. In fact, we analyze the Spearman correlation test, which is the nonparametric analog of the covariance test, showing that it is first-order asymptotically optimal in the dense regime. We also propose and analyze a nonparametric version of the higher criticism based on ranks, which we show is first-order asymptotically optimal in the moderately sparse regime. In the very sparse regime, we do not know of any rank-based test that has any power.
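This invariance can be checked directly: applying strictly increasing marginal transforms to the latent Gaussian pair leaves the ranks, and hence any rank test, unchanged. A small sketch (the particular transforms below are arbitrary illustrative choices of ours):

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(2)
n, eps, rho = 5_000, 0.3, 0.8
mask = rng.random(n) < eps
z1, e = rng.standard_normal(n), rng.standard_normal(n)
z2 = np.where(mask, rho * z1 + np.sqrt(1 - rho**2) * e, e)  # latent pair as in (5)
x = np.exp(z1)   # X gets a lognormal marginal (an arbitrary increasing transform)
y = z2 ** 3      # Y gets another strictly increasing transform
# The ranks of the observed variables coincide with those of the latent Gaussians.
same_x = np.all(rankdata(x) == rankdata(z1))
same_y = np.all(rankdata(y) == rankdata(z2))
```

Because of this, the null distribution of any rank statistic under (5) is the same as under (1).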

## 2 Gaussian Mixture Model

In this section, we focus on the Gaussian mixture model (1). We start by deriving a lower bound on the performance of the likelihood ratio test, which provides a benchmark for the other (adaptive) tests, which we subsequently analyze.

We distinguish between the dense and sparse regimes:

$$\text{dense regime:} \quad \rho = n^{-\gamma}, \ \gamma > 0 \text{ fixed}; \tag{6}$$

$$\text{sparse regime:} \quad \rho = 1 - n^{-\gamma}, \ \gamma > 0 \text{ fixed}. \tag{7}$$

We say that a testing procedure is asymptotically powerful (resp. powerless) if the sum of its probabilities of Type I and Type II errors (its risk) has limit 0 (resp. limit inferior at least 1) in the large sample asymptote.

### 2.1 The likelihood ratio test

###### Theorem 1.

Consider the testing problem (2) with $\varepsilon$ parameterized as in (4). In the dense regime, with $\rho$ parameterized as in (6), the likelihood ratio test is asymptotically powerless when $\gamma > 1/2 - \beta$. In the sparse regime, with $\rho$ parameterized as in (7), the likelihood ratio test is asymptotically powerless when $\gamma < 4\beta - 2$.

This only provides a lower bound on what can be achieved, but it will turn out to be sharp once we establish the performance of the higher criticism test in Proposition 2 below.

###### Proof.

The proof techniques are standard and already present in [10, 14], and many of the subsequent works.

Defining $U := (X - Y)/\sqrt{2}$ and $V := (X + Y)/\sqrt{2}$, the model (1) is equivalently expressed in terms of $(U, V)$, which has distribution

$$(U, V) \sim (1-\varepsilon)\, N(0,I) + \varepsilon\, N(0,\Delta_\rho), \qquad \Delta_\rho := \operatorname{diag}(1-\rho,\, 1+\rho). \tag{8}$$

Note that $U$ and $V$ are independent only conditional on knowing which component they were sampled from. In terms of the $(U_i, V_i)$’s, the likelihood ratio is

$$L := \prod_{i=1}^n L_i, \tag{9}$$

where $L_i$ is the likelihood ratio for observation $i$, which in the present case takes the following expression:

$$L_i = \frac{\frac{1-\varepsilon}{2\pi} \exp\big(-\tfrac12 U_i^2 - \tfrac12 V_i^2\big) + \frac{\varepsilon}{2\pi\sqrt{1-\rho^2}} \exp\big(-\tfrac{1}{2(1-\rho)} U_i^2 - \tfrac{1}{2(1+\rho)} V_i^2\big)}{\frac{1}{2\pi} \exp\big(-\tfrac12 U_i^2 - \tfrac12 V_i^2\big)} \tag{10}$$

$$= 1 - \varepsilon + \varepsilon (1-\rho^2)^{-1/2} \exp\Big(-\tfrac{\rho}{2(1-\rho)} U_i^2 + \tfrac{\rho}{2(1+\rho)} V_i^2\Big). \tag{11}$$

The risk of the likelihood ratio test is equal to

$$\operatorname{risk}(L) := 1 - \tfrac12\, \mathbb{E}_0 |L - 1|. \tag{12}$$

We show that $\mathbb{E}_0|L - 1| = o(1)$ under each of the stated conditions, so that $\operatorname{risk}(L) \to 1$. We consider each regime in turn.

Dense regime.

It turns out that it suffices to bound the second moment. Indeed, using the Cauchy–Schwarz inequality, we have

$$\operatorname{risk}(L) \ge 1 - \tfrac12 \sqrt{\mathbb{E}_0[L^2] - 1}, \tag{13}$$

reducing the task to showing that $\mathbb{E}_0[L^2] = 1 + o(1)$. We have

$$\mathbb{E}_0[L^2] = \prod_{i=1}^n \mathbb{E}_0[L_i^2] = \big(\mathbb{E}_0[L_1^2]\big)^n, \tag{14}$$

where

$$\mathbb{E}_0[L_1^2] = \mathbb{E}_0\Big[\Big(1 - \varepsilon + \varepsilon(1-\rho^2)^{-1/2} \exp\Big(-\tfrac{\rho}{2(1-\rho)} U_1^2 + \tfrac{\rho}{2(1+\rho)} V_1^2\Big)\Big)^2\Big] \tag{15}$$

$$= (1-\varepsilon)^2 + 2(1-\varepsilon)\varepsilon + \varepsilon^2 (1-\rho^2)^{-1}\, \mathbb{E}_0\big[\exp\big(-\tfrac{\rho}{1-\rho} U_1^2\big)\big]\, \mathbb{E}_0\big[\exp\big(\tfrac{\rho}{1+\rho} V_1^2\big)\big]. \tag{16}$$

For the third term, we have

$$\mathbb{E}_0\big[\exp\big(-\tfrac{\rho}{1-\rho} U_1^2\big)\big] = \Big(1 + \tfrac{2\rho}{1-\rho}\Big)^{-1/2} = \sqrt{\tfrac{1-\rho}{1+\rho}}, \tag{18}$$

and

$$\mathbb{E}_0\big[\exp\big(\tfrac{\rho}{1+\rho} V_1^2\big)\big] = \Big(1 - \tfrac{2\rho}{1+\rho}\Big)^{-1/2} = \sqrt{\tfrac{1+\rho}{1-\rho}}. \tag{19}$$
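Both expectations are instances of the Gaussian moment generating function identity, recalled here for convenience:

```latex
E\big[e^{tZ^2}\big]
  = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{tz^2 - z^2/2}\, dz
  = (1 - 2t)^{-1/2},
  \qquad Z \sim N(0,1), \quad t < \tfrac12,
```

applied with $t = -\rho/(1-\rho)$ and $t = \rho/(1+\rho)$, respectively; note that $\rho/(1+\rho) < 1/2$ precisely because $\rho < 1$.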

Hence, we have

$$\mathbb{E}_0[L_1^2] = 1 + \varepsilon^2 \rho^2/(1-\rho^2), \tag{20}$$

and, therefore,

$$\mathbb{E}_0[L^2] = \big[1 + \varepsilon^2\rho^2/(1-\rho^2)\big]^n \le \exp\big[n\varepsilon^2\rho^2/(1-\rho^2)\big], \tag{21}$$

so that $\mathbb{E}_0[L^2] = 1 + o(1)$ when

$$n \varepsilon^2 \rho^2 = o(1), \tag{22}$$

since $\rho$ is assumed to be bounded away from 1. Under the specified parameterization, this happens exactly when $\gamma > 1/2 - \beta$.

Sparse regime. It turns out that simply bounding the second moment, as we did above, does not suffice. Instead, we truncate the likelihood and study the behavior of its first two moments. Define the indicator variable $D_i := \mathbb{I}\{|V_i| \le \sqrt{2\log n}\}$ and the corresponding truncated likelihood ratio

$$\bar{L} = \prod_{i=1}^n \bar{L}_i, \qquad \bar{L}_i := L_i D_i. \tag{23}$$

Using the triangle inequality, the fact that $\mathbb{E}_0[L] = 1$, and the Cauchy–Schwarz inequality, we have the following upper bound:

$$\mathbb{E}_0|L - 1| \le \mathbb{E}_0|\bar{L} - 1| + \mathbb{E}_0(L - \bar{L}) \tag{24}$$

$$\le \big[\mathbb{E}_0[\bar{L}^2] - 1 + 2(1 - \mathbb{E}_0[\bar{L}])\big]^{1/2} + \big(1 - \mathbb{E}_0[\bar{L}]\big), \tag{25}$$

so that $\mathbb{E}_0|L - 1| = o(1)$ when $\mathbb{E}_0[\bar{L}] = 1 - o(1)$ and $\mathbb{E}_0[\bar{L}^2] = 1 + o(1)$.

For the first moment, we have

$$\mathbb{E}_0[\bar{L}] = \prod_{i=1}^n \mathbb{E}_0[\bar{L}_i] = \big(\mathbb{E}_0[\bar{L}_1]\big)^n, \tag{26}$$

where, using the independence of $U_1$ and $V_1$, and taking the expectation with respect to $U_1$ first,

$$\mathbb{E}_0[\bar{L}_1] = \mathbb{E}_0\Big[\Big(1 - \varepsilon + \varepsilon(1+\rho)^{-1/2} \exp\big(\tfrac{\rho}{2(1+\rho)} V_1^2\big)\Big) D_1\Big] \tag{27}$$

$$= (1-\varepsilon)\,\Psi(\sqrt{2\log n}) + \varepsilon\,\Psi\big(\sqrt{2\log n}/\sqrt{1+\rho}\big) \tag{28}$$

$$= (1-\varepsilon)\big(1 - O(n^{-1}/\sqrt{\log n})\big) + \varepsilon\big(1 - O(n^{-1/(1+\rho)}/\sqrt{\log n})\big) \tag{29}$$

$$= 1 - o(1/n) - o\big(\varepsilon n^{-1/(1+\rho)}\big), \tag{30}$$

where, for $t > 0$,

$$\Psi(t) = \mathbb{P}(|N(0,1)| \le t) = 2\Phi(t) - 1 = \int_{-t}^{t} \frac{e^{-s^2/2}}{\sqrt{2\pi}}\, ds, \tag{31}$$

and we used the fact that $1 - \Psi(t) = O(e^{-t^2/2}/t)$ as $t \to \infty$. Since $\varepsilon = n^{-\beta}$ with $\beta > 1/2$ in the sparse regime, for $\rho$ sufficiently close to 1, $\varepsilon n^{-1/(1+\rho)} = o(1/n)$, in which case $\mathbb{E}_0[\bar{L}_1] = 1 - o(1/n)$. This yields

$$\mathbb{E}_0[\bar{L}] \ge \big(1 - o(1/n)\big)^n = 1 - o(1). \tag{32}$$

For the second moment, we have

$$\mathbb{E}_0[\bar{L}^2] = \prod_{i=1}^n \mathbb{E}_0[\bar{L}_i^2] = \big(\mathbb{E}_0[\bar{L}_1^2]\big)^n, \tag{33}$$

where

$$\mathbb{E}_0[\bar{L}_1^2] = \mathbb{E}_0\Big[\Big(1 - \varepsilon + \varepsilon(1-\rho^2)^{-1/2} \exp\big(-\tfrac{\rho}{2(1-\rho)} U_1^2 + \tfrac{\rho}{2(1+\rho)} V_1^2\big)\Big)^2 D_1\Big] \tag{34}$$

$$= (1-\varepsilon)^2\, \Psi(\sqrt{2\log n}) + 2(1-\varepsilon)\varepsilon\, \Psi\big(\sqrt{2\log n}/\sqrt{1+\rho}\big) \tag{35}$$

$$\quad + \varepsilon^2 (1-\rho^2)^{-1}\, \mathbb{E}_0\big[\exp\big(-\tfrac{\rho}{1-\rho} U_1^2\big)\big]\, \mathbb{E}_0\big[\exp\big(\tfrac{\rho}{1+\rho} V_1^2\big) D_1\big]. \tag{36}$$

The sum of the first two terms is bounded from above by $1 - \varepsilon^2$. For the third term, we have

$$\mathbb{E}_0\big[\exp\big(-\tfrac{\rho}{1-\rho} U_1^2\big)\big] = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{\rho}{1-\rho}u^2 - \frac12 u^2}\, du = \sqrt{\frac{1-\rho}{1+\rho}}, \tag{37}$$

and

$$\mathbb{E}_0\big[\exp\big(\tfrac{\rho}{1+\rho} V_1^2\big) D_1\big] = \frac{1}{\sqrt{2\pi}} \int_{-\sqrt{2\log n}}^{\sqrt{2\log n}} e^{\frac{\rho}{1+\rho}v^2 - \frac12 v^2}\, dv \le \frac{2\sqrt{2\log n}}{\sqrt{2\pi}}, \tag{38}$$

using the fact that $\tfrac{\rho}{1+\rho} \le \tfrac12$ when $\rho \le 1$, so that the integrand is at most 1. Hence,

$$\mathbb{E}_0[\bar{L}_1^2] \le 1 - \varepsilon^2 + \varepsilon^2 (1-\rho^2)^{-1} \sqrt{\frac{1-\rho}{1+\rho}}\, \frac{2\sqrt{2\log n}}{\sqrt{2\pi}} \tag{39}$$

$$\le 1 + \varepsilon^2 (1-\rho)^{-1/2} (\log n)^{1/2}, \tag{40}$$

when $\rho$ is sufficiently close to 1. This in turn yields the following bound:

$$\mathbb{E}_0[\bar{L}^2] \le \big[1 + \varepsilon^2(1-\rho)^{-1/2}(\log n)^{1/2}\big]^n \le \exp\big[n\varepsilon^2(1-\rho)^{-1/2}(\log n)^{1/2}\big], \tag{41}$$

so that $\mathbb{E}_0[\bar{L}^2] = 1 + o(1)$ when

$$n\varepsilon^2(1-\rho)^{-1/2}(\log n)^{1/2} = o(1). \tag{42}$$

Under the specified parameterization, this happens exactly when $\gamma < 4\beta - 2$. ∎

In the dense regime, with $\rho$ parameterized as in (6), we say that a test achieves the detection boundary if it is asymptotically powerful when $\gamma < 1/2 - \beta$, and in the sparse regime, with $\rho$ parameterized as in (7), we say that a test achieves the detection boundary if it is asymptotically powerful when $\gamma > 4\beta - 2$.

### 2.2 The covariance test

Recall that the covariance test rejects for large values of $T_n := \sum_{i=1}^n X_i Y_i$, calibrated under the null, where the $(X_i, Y_i)$ are iid bivariate standard normal.

###### Proposition 1.

For the testing problem (2), the covariance test achieves the detection boundary in the dense regime, while it is asymptotically powerless in the sparse regime.

###### Proof.

We divide the proof into the two regimes.

Dense regime. Under $H_0$, we have

$$\mathbb{E}_0(T_n) = n\,\mathbb{E}_0(X_1 Y_1) = n\,\mathbb{E}_0(X_1)\,\mathbb{E}_0(Y_1) = 0, \tag{43}$$

$$\operatorname{Var}_0(T_n) = n\operatorname{Var}_0(X_1 Y_1) = n\,\mathbb{E}_0(X_1^2)\,\mathbb{E}_0(Y_1^2) = n, \tag{44}$$

so that, by Chebyshev’s inequality,

$$\mathbb{P}_0\big(|T_n| \ge a_n \sqrt{n}\big) \to 0, \tag{45}$$

for any sequence $(a_n)$ diverging to infinity.

Under $H_1$, we have

$$\mathbb{E}_1(T_n) = n\,\mathbb{E}_1(X_1 Y_1) = n\varepsilon\rho, \tag{46}$$

$$\operatorname{Var}_1(T_n) = n\operatorname{Var}_1(X_1 Y_1) = n(1 + 2\varepsilon\rho^2 - \varepsilon^2\rho^2) \le 3n, \tag{47}$$

so that, by Chebyshev’s inequality,

$$\mathbb{P}_1\big(|T_n - n\varepsilon\rho| \ge a_n \sqrt{n}\big) \to 0. \tag{48}$$

Thus the test with rejection region $\{|T_n| \ge a_n \sqrt{n}\}$ is asymptotically powerful when

$$\sqrt{n}\,\varepsilon\rho \ge 2 a_n. \tag{49}$$

If we choose $a_n = \log n$, for example, and $\rho$ is parameterized as in (6), this happens for $n$ large enough when $\gamma < 1/2 - \beta$.

Sparse regime. To prove that the covariance test is asymptotically powerless when $\beta > 1/2$, we show that, under $H_1$, $T_n/\sqrt{n}$ converges to the same limiting distribution as under $H_0$.

Under $H_0$, by the central limit theorem,

$$\frac{T_n}{\sqrt{n}} \rightharpoonup N(0,1). \tag{50}$$

Under $H_1$ the distribution of the $X_iY_i$’s (which remain iid) depends on $n$, but the conditions for applying Lyapunov’s central limit theorem are satisfied since

$$\mathbb{E}_1\big[(X_iY_i - \varepsilon\rho)^4\big] \le 8\big(\mathbb{E}_1[(X_iY_i)^4] + (\varepsilon\rho)^4\big), \tag{51}$$

with $(\varepsilon\rho)^4 \le 1$ and

$$\mathbb{E}_1[(X_iY_i)^4] \le \big[\mathbb{E}_1(X_i^8)\,\mathbb{E}_1(Y_i^8)\big]^{1/2} = \mathbb{E}(Z^8) = \text{const}, \tag{52}$$

where $Z \sim N(0,1)$ and the inequality is Cauchy–Schwarz’s, while

$$\operatorname{Var}_1(X_iY_i) = 1 + 2\varepsilon\rho^2 - \varepsilon^2\rho^2 \ge 1, \tag{53}$$

so that the test statistic still converges weakly to a normal distribution,

$$\frac{T_n - \mathbb{E}_1(T_n)}{\sqrt{\operatorname{Var}_1(T_n)}} \rightharpoonup N(0,1). \tag{54}$$

In the present regime, we have

$$\mathbb{E}_1(T_n) = n\varepsilon\rho, \qquad \operatorname{Var}_1(T_n) = n(1 + 2\varepsilon\rho^2 - \varepsilon^2\rho^2), \tag{55}$$

so that $\mathbb{E}_1(T_n)/\sqrt{n} \to 0$ and $\operatorname{Var}_1(T_n)/n \to 1$, and thus we conclude by Slutsky’s theorem that $T_n/\sqrt{n} \rightharpoonup N(0,1)$ under $H_1$ as well. ∎
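In practice, the covariance test amounts to comparing $T_n/\sqrt{n}$ to a standard normal, per (43)-(44). A quick sketch of both regimes of behavior (the simulation settings are illustrative choices of ours):

```python
import numpy as np
from scipy.stats import norm

def covariance_test_pvalue(x, y):
    """One-sided covariance test: under H0, T_n = sum X_i Y_i is approx N(0, n)."""
    tn = np.sum(x * y)
    return norm.sf(tn / np.sqrt(len(x)))

rng = np.random.default_rng(3)
n, eps, rho = 10_000, 0.2, 0.5   # dense-ish setting: n*eps*rho = 1000 >> sqrt(n)
mask = rng.random(n) < eps
x, z = rng.standard_normal(n), rng.standard_normal(n)
y = np.where(mask, rho * x + np.sqrt(1 - rho**2) * z, z)
p_alt = covariance_test_pvalue(x, y)
p_null = covariance_test_pvalue(rng.standard_normal(n), rng.standard_normal(n))
```

Here the signal $n\varepsilon\rho$ dominates the null standard deviation $\sqrt{n}$, so the test rejects; in the sparse regime the shift $\sqrt{n}\,\varepsilon\rho$ vanishes, as shown in the proof.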

###### Remark 1.

There are good reasons to consider the covariance test in this specific form since the means and variances are known. It is worth pointing out that the Pearson correlation test, which is more standard in practice since it does not require knowledge of the means or variances, has the same asymptotic power properties.

### 2.3 The higher criticism test and the extremes test

Define $U_i := (X_i - Y_i)/\sqrt{2}$, and note that

$$U_1, \dots, U_n \overset{\text{iid}}{\sim} (1-\varepsilon)\, N(0,1) + \varepsilon\, N(0, 1-\rho). \tag{56}$$

Seen through the $U_i$’s, the problem becomes that of detecting a sparse contamination where the effect is in the variance. We recently studied this problem in detail [2], extending previous work by Cai et al. [7], who considered a setting where the effect is both in the mean and variance. Borrowing from our prior work, we consider a higher criticism test, already defined in (3), and an extremes test, which rejects for small values of $\min_i |U_i|$.
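The reduction to the $U_i$'s is easy to check numerically: under the alternative, the scaled differences follow the variance mixture (56), so their overall variance drops to $1 - \varepsilon\rho$. A sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
n, eps, rho = 200_000, 0.2, 0.9
mask = rng.random(n) < eps
x, z = rng.standard_normal(n), rng.standard_normal(n)
y = np.where(mask, rho * x + np.sqrt(1 - rho**2) * z, z)
u = (x - y) / np.sqrt(2)
# Per (56): U_i ~ (1-eps) N(0,1) + eps N(0, 1-rho),
# hence Var(U) = (1-eps)*1 + eps*(1-rho) = 1 - eps*rho = 0.82.
var_u = np.var(u)
```

The variance deficit $\varepsilon\rho$ is exactly the quantity the covariance test targets, while the higher criticism and extremes tests exploit the excess of unusually small $|U_i|$'s.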

###### Proposition 2.

For the testing problem (2), the higher criticism test achieves the detection boundary in the dense and sparse regimes.

###### Proof.

Set $\sigma^2 := 1 - \rho$, which is the variance of the contaminated component in (56). In our prior work [2, Prop 3], we showed that the higher criticism test as defined in (3) is asymptotically powerful when

1. $\rho = n^{-\gamma}$ with $\gamma > 0$ fixed such that $\gamma < 1/2 - \beta$;

2. $\rho = 1 - n^{-\gamma}$ with $\gamma > 0$ fixed such that $\gamma > 4\beta - 2$.

This can be directly translated into the present setting, yielding the stated result. ∎

###### Proposition 3.

For the testing problem (2), the extremes test is asymptotically powerless when $\rho$ is bounded away from 1, while when $\varepsilon$ is parameterized as in (4) and $\rho$ as in (7), it is asymptotically powerful when $\gamma > 2\beta$, and asymptotically powerless when $\gamma < 2\beta$.

###### Proof.

This is also a direct corollary of our prior work [2, Prop 2]. ∎

Thus the extremes test is grossly suboptimal in the dense regime, while it is suboptimal in the sparse regime due to the fact that $2\beta > 4\beta - 2$ when $\beta < 1$.

###### Remark 2.

The higher criticism and extremes tests are both based on the $U_i$’s. This was convenient, as it reduced the problem of testing for independence to the problem of testing for a change in variance (both in a contamination model). However, reducing the original data, meaning the $(X_i, Y_i)$’s, to the $U_i$’s implies a loss of information. Indeed, a lossless reduction would be from the $(X_i, Y_i)$’s to the $(U_i, V_i)$’s, where $V_i := (X_i + Y_i)/\sqrt{2}$, with joint distribution given in (8). It just turns out that ignoring the $V_i$’s does not lead to any loss in first-order asymptotic power.

### 2.4 Numerical experiments

We performed some numerical experiments to investigate the finite-sample performance of the tests considered here: the likelihood ratio test, the Pearson correlation test (instead of the covariance test, from a practical point of view), the extremes test, the higher criticism test, and also a plug-in version of the higher criticism test where the parameters of the bivariate normal distribution (the two means and two variances) are estimated under the null. The sample size $n$ is set large in order to capture the large-sample behavior of these tests. We tried four sparsity levels. The p-values for each test are computed as follows:

1. For the likelihood ratio test, the p-values are estimated based on permutations.

2. For the higher criticism test and the plug-in higher criticism test, the p-values are estimated based on 200 permutations.

3. For the extremes test, we used the exact null distribution, which is available in a closed form.

4. For the Pearson correlation test, the p-values are from the limiting distribution.

For each scenario, we repeated the process 200 times and calculated the fraction of p-values smaller than 0.05, representing the empirical power at the 0.05 level.
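The permutation calibration used above can be sketched generically: permuting one coordinate destroys any dependence while preserving both marginals, so the permuted statistics are draws from the null distribution. (The helper below is our own illustration, not the authors' code.)

```python
import numpy as np

def permutation_pvalue(stat, x, y, n_perm=200, seed=0):
    """Monte Carlo permutation p-value for a statistic stat(x, y),
    large values of which are evidence against independence."""
    rng = np.random.default_rng(seed)
    observed = stat(x, y)
    perm = [stat(x, rng.permutation(y)) for _ in range(n_perm)]
    # The +1's make the p-value valid (slightly conservative).
    return (1 + sum(s >= observed for s in perm)) / (1 + n_perm)

rng = np.random.default_rng(5)
n = 500
x = rng.standard_normal(n)
y_dep = 0.8 * x + 0.6 * rng.standard_normal(n)   # dependent pair
y_ind = rng.standard_normal(n)                   # independent pair
cov_stat = lambda a, b: np.sum(a * b)
p_dep = permutation_pvalue(cov_stat, x, y_dep)
p_ind = permutation_pvalue(cov_stat, x, y_ind)
```

With 200 permutations the smallest attainable p-value is $1/201 \approx 0.005$, which is enough to assess power at the 0.05 level.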

The results of this experiment are reported in Figure 2 and are broadly consistent with the theory developed earlier in this section. Though we showed that the higher criticism test is first-order comparable to the likelihood ratio test in the dense regime, even with a large sample its power is much lower. The Pearson correlation test does better in that regime. The plug-in higher criticism test has a performance similar to that of the higher criticism test in the dense regime, while it loses some power in the moderately sparse regime and is powerless in the very sparse regime.

## 3 Gaussian Mixture Copula Model

In this section we turn to the Gaussian mixture copula model introduced in (5). The setting is thus nonparametric, since the marginal distributions are completely unknown, and standard invariance considerations [15, Ch 6] lead us to consider test procedures that are based on the ranks. For this, we let $R_i$ denote the rank of $X_i$ among $X_1, \dots, X_n$, and similarly, we let $S_i$ denote the rank of $Y_i$ among $Y_1, \dots, Y_n$. (The ranks are in increasing order, say.)

Although not strictly necessary, we will assume that $F$ and $G$ in (5) are strictly increasing and continuous. In that case, the ranks are invariant with respect to transformations of the form $(X, Y) \mapsto (f(X), g(Y))$ with $f$ and $g$ strictly increasing on the real line. In particular, for the rank tests that follow, this allows us to reduce their analysis under (5) to their analysis under (1).

### 3.1 The covariance rank test

The covariance rank test is the analog of the covariance test of Section 2.2. It rejects for large values of $T_n := \sum_{i=1}^n R_i S_i$ (redefined). As is well known, this is equivalent to rejecting for large values of the Spearman rank correlation.
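The equivalence with Spearman's rank correlation is an affine relation: with ranks running from 1 to $n$ and no ties, $\rho_S = \big(12\,T_n - 3n(n+1)^2\big)/\big(n(n^2-1)\big)$, which follows from the classical formula $\rho_S = 1 - 6\sum_i (R_i - S_i)^2 / (n(n^2-1))$. This can be checked numerically:

```python
import numpy as np
from scipy.stats import rankdata, spearmanr

rng = np.random.default_rng(6)
x, y = rng.standard_normal(50), rng.standard_normal(50)
r, s = rankdata(x), rankdata(y)   # ranks in {1, ..., n}; no ties for continuous data
n = len(x)
tn = np.sum(r * s)                # the covariance rank statistic T_n
# Affine relation between T_n and Spearman's rho:
rho_s = (12 * tn - 3 * n * (n + 1) ** 2) / (n * (n ** 2 - 1))
```

Since the map $T_n \mapsto \rho_S$ is increasing, the two tests have identical rejection regions.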

###### Proposition 4.

For the testing problem (2) under the model (5), the covariance rank test achieves the detection boundary in the dense regime.

We anticipate that the covariance rank test is asymptotically powerless in the sparse regime, although we do not formally prove that.

###### Proof.

We start by considering the null hypothesis $H_0$. From [11, Eq 3.11-3.12, Ch 11], we have

$$\mathbb{E}_0(T_n) = n(n+1)^2/4 = n^3/4 + O(n^2), \tag{57}$$

$$\operatorname{Var}_0(T_n) = n^2(n-1)(n+1)^2/144 \asymp n^5, \tag{58}$$

so that, using Chebyshev’s inequality,

$$\mathbb{P}_0\big(T_n \ge n^3/4 + a_n n^{5/2}\big) \to 0, \tag{59}$$

for any sequence $(a_n)$ diverging to infinity.

We now turn to the alternative hypothesis $H_1$. For convenience, we assume that the ranks run from $0$ to $n-1$. This does not change the test procedure, since it only shifts $T_n$ by a deterministic constant, but it makes the derivations somewhat less cumbersome. In particular, we have

$$R_i = \sum_{j=1}^n A_{ij}, \qquad A_{ij} := \mathbb{I}\{X_i > X_j\}, \tag{60}$$

$$S_i = \sum_{j=1}^n B_{ij}, \qquad B_{ij} := \mathbb{I}\{Y_i > Y_j\}, \tag{61}$$

so that

$$T_n = \sum_{i=1}^n \sum_{j=1}^n \sum_{k=1}^n A_{ij} B_{ik}. \tag{62}$$

For the expectation, we have

$$\mathbb{E}_1(T_n) = n(n-1)(n-2)\, \mathbb{E}_1[A_{12}B_{13}] + O(n^2) \tag{63}$$

$$= n^3\, \mathbb{E}_1[A_{12}B_{13}] + O(n^2). \tag{64}$$

The expectation is with respect to $(X_1, Y_1), (X_2, Y_2), (X_3, Y_3)$ independent, with $(X_1, Y_1)$ drawn from the mixture (1), and $X_2$ and $Y_3$ standard normal. Let $U = X_1 - X_2$ and $V = Y_1 - Y_3$, so that $A_{12}B_{13} = \mathbb{I}\{U > 0, V > 0\}$. We note that $(U, V)/\sqrt{2}$ is bivariate normal with standard marginals. Moreover, when $(X_1, Y_1)$ comes from the main component, $U$ and $V$ are uncorrelated, and therefore independent; while when $(X_1, Y_1)$ comes from the contaminated component, $U$ and $V$ have correlation $\rho/2$. Therefore,

$$\mathbb{E}_1[A_{12}B_{13}] = (1-\varepsilon)\,\Lambda(0) + \varepsilon\,\Lambda(\rho/2), \tag{65}$$

where $\Lambda(r) := \mathbb{P}(U > 0, V > 0)$ when $(U, V)$ is bivariate normal with standard marginals and correlation $r$. We immediately have $\Lambda(0) = 1/4$, and in general (this identity is well known and not hard to prove; see https://math.stackexchange.com/questions/255368/getting-px0-y0-for-a-bivariate-distribution, and it also appears, for example, in [19, Lem 1]),

$$\Lambda(r) = \frac14 + \frac{1}{2\pi}\sin^{-1}(r). \tag{66}$$

We conclude that, as $\varepsilon \to 0$ and $\rho \to 0$,

$$\mathbb{E}_1(T_n) = n^3\Big[\frac14 + \frac{1}{2\pi}\,\varepsilon \sin^{-1}(\rho/2)\Big] + O(n^2) \tag{67}$$

$$= \frac14 n^3 + \frac{1}{4\pi} n^3 \varepsilon\rho + O(n^2). \tag{68}$$

For the second moment, we similarly have

$$\mathbb{E}_1(T_n^2) = n(n-1)\cdots(n-5)\, \mathbb{E}_1[A_{12}B_{13}A_{45}B_{46}] + O(n^5) \tag{69}$$

$$= n^6\, \mathbb{E}_1[A_{12}B_{13}A_{45}B_{46}] + O(n^5), \tag{70}$$

which then implies that

$$\operatorname{Var}_1(T_n) = n^6\, \mathbb{E}_1[A_{12}B_{13}A_{45}B_{46}] + O(n^5) - \big[n^3\, \mathbb{E}_1[A_{12}B_{13}] + O(n^2)\big]^2 \tag{71}$$

$$= O(n^5), \tag{72}$$

using the independence of $A_{12}B_{13}$ and $A_{45}B_{46}$ (which involve disjoint index sets), so that the leading $n^6$ terms cancel. This is the same bound we had for $\operatorname{Var}_0(T_n)$. Thus, by Chebyshev’s inequality, we have

$$\mathbb{P}_1\big(T_n \le n^3/4 + n^3\varepsilon\rho/4\pi - a_n n^{5/2}\big) \to 0, \tag{73}$$

for any sequence $(a_n)$ diverging to infinity.

We consider the test with rejection region $\{T_n \ge n^3/4 + a_n n^{5/2}\}$. Our analysis implies that this test is asymptotically powerful when

$$n^3\varepsilon\rho/4\pi \ge 2 a_n n^{5/2}. \tag{74}$$

If we choose $a_n = \log n$, for example, and $\rho$ is parameterized as in (6), this happens for $n$ large enough when $\gamma < 1/2 - \beta$. ∎
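As a numerical sanity check on the orthant-probability identity (66), one can compare the closed form against `scipy`'s bivariate normal CDF:

```python
import numpy as np
from scipy.stats import multivariate_normal

def orthant_prob(r):
    """P(U > 0, V > 0) for a standard bivariate normal pair with correlation r.
    By central symmetry this equals P(U <= 0, V <= 0), i.e., the CDF at the origin."""
    return multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, r], [r, 1.0]]).cdf([0.0, 0.0])

closed_form = lambda r: 0.25 + np.arcsin(r) / (2 * np.pi)
errs = [abs(orthant_prob(r) - closed_form(r)) for r in (-0.3, 0.0, 0.4, 0.8)]
```

The agreement is limited only by the numerical integration tolerance of the CDF routine.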

### 3.2 The higher criticism rank test

The analog of the higher criticism test of (3) is a higher criticism based on the pairwise differences in ranks, $D_i := |R_i - S_i|$. To be specific, we define

$$\mathrm{HC}_{\mathrm{rank}} = \max_{0 \le t \le n/2} \frac{\sum_{i=1}^n \mathbb{I}\{D_i \le t\} - n u(t)}{\sqrt{n u(t)(1 - u(t))}}, \tag{75}$$

where $u(t)$ is the probability that $|R_1 - S_1| \le t$ when $R_1$ and $S_1$ are independent and uniformly distributed on $\{0, \dots, n-1\}$, which can be expressed in closed form as

$$u(t) = \frac{n^2 - (n-t)(n-t-1)}{n^2} = \frac{n(2t+1) - t(t+1)}{n^2}. \tag{76}$$

Note that in this definition the denominator is only an approximation to the standard deviation of the numerator. The standard deviation has a closed-form expression, known since the work of Hoeffding [13, Th 2], but it is cumbersome and relatively costly to compute (although its computation is only done once for each $n$). Also, there is a fair amount of flexibility in the choice of the range of thresholds considered; this particular choice seems to work well enough. As with any other rank test, it is calibrated by permutation (or Monte Carlo if there are no ties in the data).
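A direct implementation of (75)-(76) is short; the sketch below uses ranks shifted to $\{0, \dots, n-1\}$ and scans all thresholds $t \le n/2$ (the simulation settings are our own illustrative choices):

```python
import numpy as np
from scipy.stats import rankdata

def hc_rank(x, y):
    """Rank higher criticism (75), with u(t) as in (76) and ranks in {0, ..., n-1}."""
    n = len(x)
    d = np.abs((rankdata(x) - 1) - (rankdata(y) - 1))   # D_i = |R_i - S_i|
    t = np.arange(n // 2 + 1)
    u = (n * (2 * t + 1) - t * (t + 1)) / n**2          # null probability u(t)
    z = np.array([(d <= ti).sum() for ti in t])         # Z_n(t) = #{i : D_i <= t}
    return np.max((z - n * u) / np.sqrt(n * u * (1 - u)))

rng = np.random.default_rng(7)
n, eps, rho = 2_000, 0.3, 0.99
mask = rng.random(n) < eps
x, z0 = rng.standard_normal(n), rng.standard_normal(n)
y = np.where(mask, rho * x + np.sqrt(1 - rho**2) * z0, z0)
stat_alt = hc_rank(x, y)
stat_null = hc_rank(rng.standard_normal(n), rng.standard_normal(n))
```

Under the alternative, contaminated pairs have nearly equal ranks, inflating the counts at small $t$ well beyond their null level $n u(t)$.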

###### Theorem 2.

For the testing problem (2) under the model (5), the higher criticism rank test achieves the detection boundary in the moderately sparse regime.

###### Proof.

We start with the situation under the null hypothesis $H_0$, where we show that $\mathrm{HC}_{\mathrm{rank}}$ is of order at most $\log n$ based on a concentration inequality for randomly permuted sums. Fixing a threshold $t \le n/2$, define

$$a_{i,j} = \mathbb{I}\{|i - j| \le t\}, \quad \text{for } 1 \le i,j \le n. \tag{77}$$

Since $(R_1, \dots, R_n)$ is independent of $(S_1, \dots, S_n)$, as we are under the null, the count $Z_n(t) := \sum_{i=1}^n \mathbb{I}\{D_i \le t\}$ has the same distribution as $A_n := \sum_{i=1}^n a_{i,\pi(i)}$ when $\pi$ is a uniformly distributed random permutation of $\{1, \dots, n\}$. Note that

$$\mathbb{E}(A_n) = \frac{1}{n} \sum_{i=1}^n \sum_{j=1}^n a_{i,j} = \frac{n(2t+1) - t(t+1)}{n} = n u(t). \tag{78}$$

By [1, Cor 2.1], there is a universal constant $c_0$ such that, for any $b > 0$,

$$\mathbb{P}\big(|A_n - \mathbb{E}(A_n)| \ge b\big) \le c_0 \exp\bigg(-\frac{b^2/c_0}{\frac{1}{n}\sum_{i,j} a_{i,j}^2 \vee b \max_{i,j} a_{i,j}}\bigg) \tag{79}$$

$$= c_0 \exp\bigg(-\frac{b^2/c_0}{\mathbb{E}(A_n) \vee b}\bigg), \tag{80}$$

using the fact that $a_{i,j} \in \{0,1\}$ for all $i, j$. This implies that, for $q > 0$,

$$\mathbb{P}_0\Big(Z_n(t) \ge n u(t) + q\sqrt{n u(t)(1-u(t))}\Big) \le c_0 \exp\bigg(-\frac{q^2 n u(t)(1-u(t))/c_0}{n u(t) \vee q\sqrt{n u(t)(1-u(t))}}\bigg) \tag{81}$$

$$\le c_0 \exp(-q/c_1), \tag{82}$$

for some other constant $c_1$, using the fact that $1 - u(t)$ is bounded away from 0 when $t \le n/2$, which is the range of $t$’s we are considering. Hence, choosing $q = 2c_1 \log n$ and using the union bound, we have

$$\mathbb{P}_0(\mathrm{HC}_{\mathrm{rank}} \ge q) \le \sum_{t \le n/2} \mathbb{P}_0\Big(Z_n(t) \ge n u(t) + q\sqrt{n u(t)(1-u(t))}\Big) \tag{83}$$

$$\le (n/2 + 1)\, c_0 \exp(-q/c_1) \asymp 1/n \to 0. \tag{84}$$

We now consider the alternative $H_1$, and show that $\mathrm{HC}_{\mathrm{rank}}/\log n \to \infty$ in probability under the stated condition. Let

$$Q_n(t) = \frac{Z_n(t) - n u(t)}{\sqrt{n u(t)(1 - u(t))}}. \tag{85}$$

Since $\mathrm{HC}_{\mathrm{rank}} = \max_{0 \le t \le n/2} Q_n(t)$, it suffices to find some $t$ in that range such that $Q_n(t)/\log n \to \infty$ in probability. For that, define the empirical distribution functions $F_n$ and $G_n$ of the $X_i$’s and the $Y_i$’s, respectively. Note that, by definition,