On the consistency of adaptive multiple tests

01/08/2018 ∙ by Marc Ditzhaus, et al. ∙ Heinrich Heine Universität Düsseldorf

Much effort has been devoted to controlling the "false discovery rate" (FDR) when m hypotheses are tested simultaneously. The FDR is the expectation of the "false discovery proportion" FDP=V/R given by the ratio of the number of false rejections V and the number of all rejections R. In this paper, we take a closer look at the FDP for adaptive linear step-up multiple tests. These tests extend the well-known Benjamini and Hochberg test by estimating the unknown amount m_0 of true null hypotheses. We give exact finite sample formulas for higher moments of the FDP and, in particular, for its variance. These allow a precise discussion of the consistency of adaptive step-up tests. We present sufficient and necessary conditions for consistency in terms of the estimators of m_0 and the underlying probability regime. We apply our results to convex combinations of generalized Storey type estimators with various tuning parameters and (possibly) data-driven weights. The corresponding step-up tests allow a flexible adaptation. Moreover, these tests control the FDR at finite sample size. We compare these tests to the classical Benjamini and Hochberg test and discuss their advantages.


1 Introduction

Testing hypotheses simultaneously is a frequent issue in statistical practice, e.g. in genomic research. A widely used criterion for deciding which of these hypotheses should be rejected is the so-called "false discovery rate" (FDR) promoted by Benjamini and Hochberg [3]. The FDR is the expectation of the "false discovery proportion" (FDP), the ratio V/R of the number V of false rejections and the number R of all rejections. Let a level α be given. Under the so-called basic independence (BI) assumption we have FDR = (m_0/m)α for the classical Benjamini and Hochberg linear step-up test, briefly denoted by BH test. Here, m_0 is the unknown number of true null hypotheses among the m hypotheses under consideration. To achieve higher power it is of great interest to exhaust the FDR level as far as possible. Especially, if m_0 is not close to m there is room for improvement of the BH test. That is why the interest in adaptive tests has been growing since the beginning of this century. The idea is to estimate m_0 by an appropriate estimator \hat m_0 in a first step and to apply the BH test at the (data-dependent) level

αm / \hat m_0

in the second step. Heuristically, we obtain for a good estimator \hat m_0 ≈ m_0 that FDR ≈ (m_0/m)(αm/\hat m_0) ≈ α. Benjamini and Hochberg [4] suggested an early estimator for m_0 leading to an FDR controlling test. Before them, Schweder and Spjøtvoll [31] had already discussed estimators for m_0 using plots of the empirical distribution function of the p-values. The number of estimators suggested in the literature is huge; here is only a short selection: Benjamini et al. [5], Blanchard and Roquain [7, 8] and Zeisel et al. [36]. We want to emphasize the papers of Storey [33] and Storey et al. [34] and, in particular, the Storey estimator based on a tuning parameter λ. We refer to Storey and Tibshirani [35] for a discussion of the adjustment of the tuning parameter λ. Generalized Storey estimators with data-dependent weights, which were already discussed by Heesen and Janssen [21], will be our prime example for the general results. A nice property of these estimators is their finite sample FDR control, see [21]. Sufficient conditions on general estimators for finite sample FDR control can be found in Sarkar [28] and Heesen and Janssen [20, 21].
Besides FDR control there are also other control criteria, for example the family-wise error rate (FWER). Adaptive tests, i.e. tests using a (plug-in) estimator for m_0, are also used and discussed in the literature for the control of the FWER, see e.g. Finner and Gontscharuk [14] and Sarkar et al. [29].

Stochastic process methods were applied to study the asymptotic behavior of the FDP, among others to calculate asymptotic confidence intervals, and of the familywise error rate (FWER) in detail, see Genovese and Wasserman [17], Meinshausen and Bühlmann [24], Meinshausen and Rice [25] and Neuvial [26]. When a huge number of p-values is involved, the fluctuation of the FDP naturally becomes relevant. Ferreira and Zwinderman [12] presented formulas for higher moments of the FDP for the BH test, and Roquain and Villers [27] did so for step-up and step-down tests with general (but data-independent) critical values. We generalize these formulas to adaptive step-up tests using general estimators for m_0. In particular, we derive an exact finite sample formula for the variability of the FDP. As an application we discuss the consistency of the FDP and present sufficient and necessary conditions for it. We also discuss the more challenging case of sparsity, where the proportion of false null hypotheses vanishes asymptotically. This situation can be compared to the one of Abramovich et al. [1], who derived an estimator of the (sparse) mean of a multivariate normal distribution using FDR procedures.

Outline of the results. In Section 2 we introduce the model as well as the adaptive step-up tests, in particular the generalized Storey estimators which serve as prime examples. Section 3 provides exact finite sample variance formulas for the FDP under the BI model. Extensions to higher moments can be found in the appendix, see Section 9. These results apply to the variability and the consistency of the FDP, see Section 4. Roughly speaking, we have consistency if the estimators are stable and the number of rejections tends to infinity. Section 5 is devoted to concrete adaptive step-up tests, mainly based on convex combinations of generalized Storey estimators with data-dependent weights. We will see that consistency cannot be achieved in general. Under mild assumptions the adaptive tests based on the estimators mentioned above are superior to the BH test: the FDR is exhausted to a greater extent but remains controlled by the level α at finite sample size. Furthermore, they are consistent at least whenever the BH test is consistent. In Section 6 we discuss least favorable configurations, which serve as useful technical tools. For the reader's convenience we add a discussion and summary of the paper in Section 7. All proofs are collected in Section 8.

2 Preliminaries

2.1 The model and general step-up tests

Let us first describe the model and the procedures. A multiple testing problem consists of m null hypotheses H_1, …, H_m with associated p-values p_1, …, p_m on a common probability space. We will always use the basic independence (BI) assumption given by

  (BI1) The set of hypotheses can be divided into the disjoint union of the unknown portions of true nulls and false nulls, respectively. Denote by m_0 and m_1 the corresponding cardinalities.

  (BI2) The vector of p-values corresponding to the true nulls and the vector corresponding to the false nulls are independent, where any dependence structure is allowed for the p-values of the false hypotheses.

  (BI3) The p-values of the true nulls are independent and uniformly distributed on [0, 1], i.e. P(p_i ≤ t) = t for all t ∈ [0, 1] and every true null i.

Throughout the paper let m_0 be nonrandom. As in Heesen and Janssen [21] the results can be extended to more general models with random m_0 by conditioning on m_0. By using this modification the results easily carry over to the familiar mixture models discussed, for instance, by Abramovich et al. [1] and Genovese and Wasserman [17]. We study adaptive multiple step-up tests with estimated critical values extending the famous Benjamini and Hochberg [3] step-up test, briefly denoted by BH test. In the following we recall the definition of this kind of test. Let

(2.1)

denote possibly data-dependent critical values \hat α_{1:m}, …, \hat α_{m:m}. As an example we recall the critical values of the BH test, which do not depend on the data: α_{i:m} = iα/m for i = 1, …, m.

If p_{1:m} ≤ … ≤ p_{m:m} denote the ordered p-values then the number of rejections is given by

R = max{ i : p_{i:m} ≤ \hat α_{i:m} }   (with max ∅ := 0),

and the multiple procedure rejects exactly the null hypotheses belonging to the R smallest p-values.

Moreover, let

V = #{ i : H_i is a true null hypothesis and is rejected }   (2.2)

be the number of falsely rejected null hypotheses. Then the false discovery rate and the false discovery proportion are given by

FDR = E(FDP)   with   FDP = V / max(R, 1).   (2.3)
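For illustration only (not part of the paper), the following Python sketch implements the BH step-up test with critical values iα/m and evaluates the resulting FDP; the function names bh_step_up and fdp as well as the toy data are our own choices.

    import numpy as np

    def bh_step_up(pvals, alpha):
        """Indices rejected by the Benjamini-Hochberg step-up test at level alpha."""
        pvals = np.asarray(pvals, dtype=float)
        m = len(pvals)
        order = np.argsort(pvals)
        crit = alpha * np.arange(1, m + 1) / m        # critical values i*alpha/m
        below = np.nonzero(pvals[order] <= crit)[0]
        if below.size == 0:                           # no ordered p-value below its critical value
            return np.array([], dtype=int)
        R = below[-1] + 1                             # step-up: take the largest such index
        return order[:R]                              # reject the R smallest p-values

    def fdp(rejected, is_true_null):
        """FDP = V / max(R, 1): falsely rejected true nulls over all rejections."""
        R = len(rejected)
        V = int(is_true_null[rejected].sum())
        return V / max(R, 1)

    # toy usage: 15 true nulls with uniform p-values, 5 false nulls with small p-values
    rng = np.random.default_rng(0)
    p = np.concatenate([rng.uniform(size=15), rng.uniform(0.0, 0.01, size=5)])
    is_null = np.concatenate([np.ones(15, bool), np.zeros(5, bool)])
    rejected = bh_step_up(p, alpha=0.05)
    print(len(rejected), fdp(rejected, is_null))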

Good multiple tests like the BH test or the frequently applied adaptive test of Storey et al. [34] control the FDR at a pre-specified acceptance error bound α, at least under the BI assumption. Besides this control, two further aspects are of importance and are discussed below:

  1. To make the test sensitive for signal detection the FDR should exhaust the level α as far as possible.

  2. On the other hand the variability of the FDP, see (2.3), is of interest in order to judge the stability of the test.

For a large class of adaptive tests exact FDR formulas were established in Heesen and Janssen [21]. Here the reader can find new α-controlling tests with high FDR. These formulas are now completed by formulas for exact higher FDP moments and, in particular, for the variance. These results open the door for a discussion of the consistency of multiple tests, i.e.

FDP - FDR → 0 in P-probability as the number m of hypotheses tends to infinity.   (2.4)

Specific results are discussed in Sections 5 and 4. Under an additional assumption we obtain the following necessary condition for consistency:

(2.5)

As already stated, we cannot expect consistency in general. In the following we discuss the BH test in two extreme cases.

Example 2.1
  1. Let m_1 be fixed. An extreme case is the so-called Dirac uniform configuration DU, where all p-values corresponding to the false nulls are equal to zero. Under this configuration one has convergence in distribution, see Finner and Roters [16] and Theorem 4.8 of Scheer [30]. The limit variable belongs to the class of linear Poisson distributions, see Finner et al. [15], Jain [22] and Consul and Famoye [10]. Hence, the BH test is never consistent under BI for fixed m_1 since (2.5) is violated.

  2. Another extreme case is given by i.i.d. p-values: suppose that the p_i, i = 1, …, m, are i.i.d. and uniformly distributed. Then the BH tests are not consistent, see Theorem 5.11(b).

More information about DU and least favorable configurations can be found in Section 6. The requirement for consistency will be somewhere in between these two extreme cases, where a suitable assumption will always be needed.

2.2 Our step-up tests and underlying assumptions

In the following we introduce the adaptive step-up tests considered in this paper. Let α ∈ (0, 1) be a fixed level and let λ ∈ (0, 1) be a tuning parameter; we agree that no null with p_i > λ should be rejected. The latter is not restrictive in practice since it is very unusual to reject a null whose p-value is large. We divide the range of the p-values into a decision region [0, λ], where all hypotheses with p_i ≤ λ have a chance to be rejected, and an estimation region (λ, 1], where the p-values are used to estimate m_0, see Figure 1.

Figure 1: Decision region [0, λ] (dashed) and estimation region (λ, 1].

To be more specific we consider estimators \hat m_0 of the form

(2.6)

for estimating m_0, which are measurable functions depending only on the p-values falling into the estimation region. As usual we denote by \hat F_m the empirical distribution function of the p-values p_1, …, p_m. As motivated in the introduction we now plug these estimators into the BH test. Doing this we obtain the data-driven critical values

\hat α_{i:m} = min( iα / \hat m_0 , λ ),   i = 1, …, m,   (2.7)

where we promote the use of the upper bound λ, as Heesen and Janssen [21] already did. The following two quantities will be used repeatedly:

(2.8)

Throughout this paper, we investigate different mild assumptions. For our main results we fix the following two:

  (A1) Suppose that

  (A2) Suppose that \hat m_0 is always positive and

If the convergence required in (A1) only holds along subsequences then our results apply to the corresponding subsequences. The most interesting case is m_0 > αm since otherwise (if m_0 ≤ αm) the FDR can be controlled, i.e. FDR ≤ α, by rejecting everything.

Remark 2.2
  1. Under (A2) the FDR of the adaptive multiple test was obtained for the BI model by Heesen and Janssen [21]:

    (2.9)

    In particular, we obtain

    where the upper bound is always strictly smaller than α for finite m.

  2. If (A2) is not fulfilled then one can consider a suitably modified estimator instead of \hat m_0. Note that both estimators lead to the same critical values, and so assumption (A2) is not restrictive.

A prominent example of an adaptive test controlling the FDR by α is given by the Storey estimator (2.11):

(2.10)
(2.11)

A refinement was established by Heesen and Janssen [21]. They introduced a couple of inspection points, where m_0 is estimated on each interval between consecutive inspection points. As motivation for this idea observe that the Storey estimator can be rewritten as the following linear combination

with appropriate weights. The ingredients

(2.12)

are also estimators for m_0, which were used by Liang and Nettleton [23] in another context. Under BI the following theorem was proved by Heesen and Janssen [21]. A discussion of the consistency of these estimators is given in Section 5.

Theorem 2.3 (cf. Thm 10 in [21])

Let random weights summing to one be given. The adaptive step-up test using the estimator

(2.13)

controls the FDR, i.e. FDR ≤ α.
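The following Python sketch (ours, not from [21]) illustrates the construction in the spirit of Section 2.2 and of the theorem above: Storey-type estimators with the '+1' correction for several tuning parameters are combined with fixed convex weights as a simple stand-in for the generalized estimators (2.12)-(2.13) with data-driven weights, and the result is plugged into step-up critical values capped at λ. The exact estimators and weight choices of Heesen and Janssen [21] are not reproduced here.

    import numpy as np

    def storey_m0(pvals, lam):
        """Storey-type estimator of m_0 with tuning parameter lam (with '+1' correction)."""
        m = len(pvals)
        return (m - np.sum(pvals <= lam) + 1.0) / (1.0 - lam)

    def combined_m0(pvals, lams, weights):
        """Convex combination of Storey-type estimators (illustrative stand-in for (2.13))."""
        return sum(w * storey_m0(pvals, lam) for w, lam in zip(weights, lams))

    def adaptive_step_up(pvals, alpha, lam, m0_hat):
        """Step-up test with plug-in critical values min(i*alpha/m0_hat, lam)."""
        pvals = np.asarray(pvals, dtype=float)
        m = len(pvals)
        order = np.argsort(pvals)
        crit = np.minimum(alpha * np.arange(1, m + 1) / m0_hat, lam)
        below = np.nonzero(pvals[order] <= crit)[0]
        R = 0 if below.size == 0 else below[-1] + 1
        return order[:R]                              # reject the R smallest p-values

    # usage: true nulls uniform, false nulls with small p-values (illustrative choices)
    rng = np.random.default_rng(1)
    p = np.concatenate([rng.uniform(size=800), rng.beta(0.2, 5.0, size=200)])
    lam = 0.5
    m0_hat = combined_m0(p, lams=[0.5, 0.7, 0.9], weights=[0.5, 0.3, 0.2])
    rejected = adaptive_step_up(p, alpha=0.05, lam=lam, m0_hat=m0_hat)
    print(round(float(m0_hat)), len(rejected))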

Finally, we want to present a condition ensuring asymptotic FDR control. It was proven by Heesen and Janssen [20] for a larger class than the BI models, namely reverse martingale models. The same condition was already used by Finner and Gontscharuk [14] for asymptotic FWER control.

Theorem 2.4 (cf. Thm 6.1 in [20])

Suppose that (A1) and (A2) hold. If

then we have asymptotic FDR control, i.e. lim sup FDR ≤ α as the number of hypotheses tends to infinity.

3 Moments

This section provides exact second moment formulas of the FDP for our adaptive step-up tests for a fixed regime. Our method of proof relies on conditioning with respect to the σ-algebra

Conditionally on this (non-observable) σ-algebra the relevant quantities are fixed values, but only part of them are given by the data and hence observable. The FDR formula (2.9) is now completed by an exact variance formula. The proof also offers a quick approach to the known variance formula of Ferreira and Zwinderman [12] for the Benjamini and Hochberg test (i.e. the non-adaptive case \hat m_0 ≡ m). Without loss of generality we can assume that the p-value vector is ordered such that

Now, we introduce a new p-value vector: one p-value satisfying a certain property is replaced (for convenience the one with the smallest index having this property); otherwise the vector is left unchanged. Moreover, let the corresponding quantity denote the number of rejections of the adaptive test for the substituted vector of p-values. Note that the estimator \hat m_0 remains unchanged when the substituted vector is considered instead of the original one.

Theorem 3.5

Suppose that assumption (A2) is fulfilled:

  (a) The second moment of the FDP is given by

  (b) The variance of the FDP fulfils

  (c) We have

Exact higher moment formulas are established in the appendix, see Section 9.
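As a rough empirical companion to the exact formulas of this section (and no substitute for them), the following Monte Carlo sketch estimates the mean and the variance of the FDP under a Dirac-uniform configuration for an adaptive test with a Storey-type estimator; all function names and parameter choices are ours.

    import numpy as np

    def storey_m0(pvals, lam):
        """Storey-type estimator of m_0 with the '+1' correction (an assumption here)."""
        m = len(pvals)
        return (m - np.sum(pvals <= lam) + 1.0) / (1.0 - lam)

    def fdp_once(rng, m, m0, alpha, lam):
        """One draw of the FDP under DU(m, m_0) for the adaptive step-up test."""
        p_true = rng.uniform(size=m0)                 # true nulls: uniform p-values
        p_false = np.zeros(m - m0)                    # DU configuration: false nulls at 0
        p = np.concatenate([p_false, p_true])
        m0_hat = storey_m0(p, lam)
        crit = np.minimum(alpha * np.arange(1, m + 1) / m0_hat, lam)
        sorted_p = np.sort(p)
        below = np.nonzero(sorted_p <= crit)[0]
        R = 0 if below.size == 0 else below[-1] + 1
        threshold = crit[R - 1] if R > 0 else 0.0     # rejection threshold of the step-up test
        V = int(np.sum(p_true <= threshold))          # falsely rejected true nulls
        return V / max(R, 1)

    rng = np.random.default_rng(2)
    samples = np.array([fdp_once(rng, m=1000, m0=800, alpha=0.05, lam=0.5)
                        for _ in range(2000)])
    print("E[FDP] approx.", samples.mean(), "Var[FDP] approx.", samples.var())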

4 The variability of the FDP and the consistency of adaptive multiple tests

The exact variance formula applies to the stability of the FDP and its consistency as the number m of hypotheses tends to infinity. If not stated otherwise, all limits are meant as m → ∞. In the following we need a further mild assumption:

  (A3) There is some such that for all .

Clearly, (A3) is fulfilled for the trivial estimator and for all generalized weighted estimators of the form (2.13) with . Note that under (A1) and (A3), (2.5) is a necessary condition for consistency. In the following we give bounds for the variance of the FDP depending on the leading term in the variance formula of Theorem 3.5:

Lemma 4.6

Suppose that (A2) is fulfilled.

  1. We have

    (4.1)
    (4.2)
  2. Suppose (A3). Then with and for all

Hence, under (A1) we have consistency if and only if the variance of the FDP tends to zero. In the following we present sufficient and necessary conditions for this.

Theorem 4.7

Under (A1)-(A3) the following (a) and (b) are equivalent.

  (a) (Consistency) We have FDP - FDR → 0 in P-probability.

  (b) It holds that

    (4.3)
    (4.4)

Roughly speaking, consistency requires that the number of rejections, see (4.4), tends to infinity, together with a stability condition (4.3) for the estimator \hat m_0.
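The following small simulation illustrates this reading of Theorem 4.7 for the BH test: with a fixed proportion of alternatives the number of rejections grows with m and the sample variance of the FDP shrinks. The proportion of true nulls and the Beta alternative are illustrative choices, not taken from the paper.

    import numpy as np

    def bh_fdp(pvals, is_null, alpha):
        """FDP of the BH step-up test: falsely rejected true nulls over all rejections."""
        m = len(pvals)
        order = np.argsort(pvals)
        crit = alpha * np.arange(1, m + 1) / m
        below = np.nonzero(pvals[order] <= crit)[0]
        R = 0 if below.size == 0 else below[-1] + 1
        V = int(is_null[order[:R]].sum())
        return V / max(R, 1)

    rng = np.random.default_rng(3)
    for m in (100, 1000, 10000):
        m0 = int(0.8 * m)                      # illustrative proportion of true nulls
        fdps = []
        for _ in range(500):
            p = np.concatenate([rng.uniform(size=m0),
                                rng.beta(0.2, 5.0, size=m - m0)])
            mask = np.concatenate([np.ones(m0, bool), np.zeros(m - m0, bool)])
            fdps.append(bh_fdp(p, mask, alpha=0.05))
        fdps = np.array(fdps)
        print(m, round(float(fdps.mean()), 3), round(float(fdps.var()), 5))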

Remark 4.8

Suppose that (A1)-(A3) are fulfilled.

  (a) implies

    (4.5)
  (b) Under (4.5) we have and in P-probability and so by Theorem 3.5(c).

Under mild additional assumptions we can improve the convergence in expectation from Remark 4.8(b). Recall that the FDP and the number of rejections depend, of course, on the pre-specified level α. In contrast to the rest of this paper, in the following theorem we consider more than one level. That is why, for this theorem only, we use a notation which makes the dependence on the level explicit.

Theorem 4.9

Suppose (A1)-(A3). Moreover, we assume that we have consistency for all levels and some . Then we have in P-probability for all that

The next example points out that consistency may depend on the level α and that adaptive tests may be consistent while the BH test is not. A proof of the statements is given in Section 8.

Example 4.10

Let the p-values corresponding to the true nulls be i.i.d. uniformly distributed on [0, 1], and let the p-values from the false nulls follow a suitable regime. Then the BH test at one level is not consistent while the BH test at another level is consistent. But the adaptive test Stor using the classical Storey estimator (2.10) is consistent.

5 Consistent and inconsistent regimes

Below we will exclude degenerate estimators which could lead to rejecting all hypotheses with p_i ≤ λ. To avoid this effect let us introduce:

  (A4) There exists a constant with

Note that (A4) guarantees that (A2) holds at least with probability tending to one. The next theorem yields a necessary condition for consistency.

Theorem 5.11

Suppose that and (A4) holds.

  (a) If we have consistency then .

  (b) Suppose that (A1) holds. If all p-values are i.i.d. and uniformly distributed then we have no consistency.

Consistency for the case was already discussed by Genovese and Wasserman [17], who used a stochastic process approach. Also Ferreira and Zwinderman [12] used their formulas for the moments of the FDP to discuss consistency for the BH test. By their Proposition 2.2 or our Theorem 4.7 it is sufficient to show for that in P-probability. For this purpose Ferreira and Zwinderman [12] found conditions such that in P-probability. The sparse signal case is more delicate since always tends to even for adaptive tests. Recall for the following lemma that is the largest intersection point of the empirical distribution function \hat F_m and the random Simes line.

Lemma 5.12

Suppose that (A1) with and (A4) are fulfilled. Then in P-probability. In particular, under (A3) we have in P-probability.

Besides the result of Theorem 5.11 we already know that a further necessary condition for consistency is (4.3), which is assumed to be fulfilled in the following. Turning to convergent subsequences we can assume without loss of generality under (A1), (A3) and (A4) that . In this case (4.3) is equivalent to

(5.1)

In the following we will work with (5.1) rather than with (4.3). Due to Lemma 5.12 the question about consistency can be reduced, in the sparse signal case, to a comparison of the random Simes line defined above and close to .

Theorem 5.13

Assume that (A1), (A3), (A4) and (5.1) hold. Let and be sequences in such that and

(5.2)

where , denote the empirical distribution functions of the p-values corresponding to the true and the false nulls, respectively.
Then in P-probability and so we have consistency by Theorem 4.7.

Remark 5.14
  1. Suppose that the p-values corresponding to the false nulls are i.i.d. with a common distribution function. Then the statement of Theorem 5.13 remains valid if we replace (5.2) by the condition that for all sufficiently large

    (5.3)

    A proof of this statement is given in Section 8.

  2. In the case we need a sequence tending to .

  3. For the DU-configuration the assumption (5.2) is fulfilled for as long as the necessary condition holds.

As already stated, consistency only holds under certain additional assumptions. In the following we compare consistency of the classical BH test and adaptive tests with an appropriate estimator.

Lemma 5.15

Suppose that (A1), (A3) and (A4) are fulfilled. Assume that (5.1) holds for some . If then additionally suppose that

(5.4)

Then consistency of the BH test implies consistency of the adaptive test.

Under some mild assumptions Lemma 5.15 is applicable for the weighted estimator (2.13), see Corollary 5.16(c) for sufficient conditions.

5.1 Combination of generalized Storey estimators

In the following we become more concrete by discussing the combined Storey estimators (2.13) introduced in Section 2.2. For this purpose we need the following assumption to ensure that (A4) is fulfilled.

  (A5) Suppose that .

Corollary 5.16

Let (A1), (A5) and the assumptions of Theorem 2.3 be fulfilled. Consider the adaptive multiple test with .

  (a) Suppose that . Then (5.1) holds with and

    (5.5)
  (b) Suppose that and we have with probability one that

    (5.6)

    for every . Moreover, assume that

    (5.7)

    and all . If there is some and such that

    (5.8)

    where , then we have an asymptotic improvement of the FDR compared to the Benjamini-Hochberg procedure, i.e.

    (5.9)
  (c) (Consistency) Suppose that the weights are asymptotically constant, i.e. a.s. for all , and fulfil (5.6). Assume that

    (5.10)

    and for all . Moreover, suppose that

    (5.11)

    for some , where . Additionally, assume if . Then (5.1) holds for some and consistency of the BH test implies consistency of the adaptive test. Moreover, if (5.2) holds for and a sequence with then we always have consistency of the adaptive test.

It is easy to see that the assumptions of (c) imply the ones of (b). Typically, the p-values from the false nulls are stochastically smaller than the uniform distribution, i.e. P(p_i ≤ t) ≥ t for all t ∈ [0, 1] (with strict inequality for some t). This may lead to (5.7) or (5.10).

Remark 5.17

If the p-values from the false nulls are i.i.d. with a distribution function such that for all , then (5.7) and (5.10) are fulfilled. Moreover, if and then .

If the weights are deterministic then weights fulfilling (5.6) produce convex combinations of Storey estimators with different tuning parameters , compare to (2.10)-(2.12).

5.2 Asymptotically optimal rejection curve

Our results can be transferred to general deterministic critical values (2.1) which are not of the form (2.7) and do not use a plug-in estimator for m_0. To label this case we use . Analogously to Sections 4 and 9 we define for by setting the p-values from the true nulls to . By the same arguments as in the proof of Theorem 3.5 we obtain

The first formula can also be found in Benditkis et al. [2], see the proof of Theorem 2 therein. The proof of the second one is left to the reader. By these formulas we can now treat an important class of critical values given by

(5.12)

A necessary condition for the valid step-up tests is . This condition holds for the critical values (5.12) if

(5.13)

These critical values are closely related to the critical values

α_{i:m} = iα / (m - i(1 - α)),   i = 1, …, m - 1,   (5.14)

of the asymptotically optimal rejection curve introduced by Finner et al. [13]. Note that the case i = m is excluded on purpose because it would lead to a critical value equal to one. The remaining coefficient has to be defined separately such that , see Finner et al. [13] and Gontscharuk [18] for a detailed discussion. It is well-known that neither for (5.12) with and nor for (5.14) do we have control of the FDR by α over all BI models simultaneously. This follows from Lemma 4.1 of Heesen and Janssen [20] since . However, Heesen and Janssen [20] proved that for all fixed , and there exists a unique parameter such that

where the supremum is taken over all BI models at the given sample size. The value may be found under the least favorable configuration DU using numerical methods.
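For illustration, the following sketch computes AORC-induced critical values in the form iα/(m - i(1 - α)), with the last coefficient handled separately, together with a generic step-up test; the finite sample adjustment of Heesen and Janssen [20] mentioned above is not reproduced here, and the function names are ours.

    import numpy as np

    def aorc_critical_values(m, alpha, last=None):
        """Critical values induced by the AORC: i*alpha/(m - i*(1-alpha)).

        The coefficient for i = m would equal 1 and is therefore usually
        redefined; `last` is a placeholder for that separate choice.
        """
        i = np.arange(1, m + 1, dtype=float)
        crit = i * alpha / (m - i * (1.0 - alpha))
        if last is not None:
            crit[-1] = last
        return crit

    def step_up_with_critical_values(pvals, crit):
        """Generic step-up test for an arbitrary vector of critical values."""
        order = np.argsort(pvals)
        below = np.nonzero(np.asarray(pvals)[order] <= crit)[0]
        R = 0 if below.size == 0 else below[-1] + 1
        return order[:R]                     # indices of the R smallest p-values

    # usage: compare the first AORC critical values with the BH critical values
    m, alpha = 20, 0.05
    print(aorc_critical_values(m, alpha)[:5])
    print((alpha * np.arange(1, m + 1) / m)[:5])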
By transferring our techniques to this type of critical values we get the following sufficient and necessary conditions for consistency.

Lemma 5.18

Let (A1) be fulfilled. Let and be sequences in such that and fulfil (5.13) for every . For every consider the step-up test with critical values given by (5.12) with .

  1. Then we have consistency, i.e. in P-probability, iff the following conditions (5.15)-(5.17) hold in P-probability:

    (5.15)
    (5.16)
    (5.17)
  2. If , and then (5.15) is sufficient for consistency and, moreover, in this case.

6 Least favorable configurations and consistency

Below least favorable configurations (LFC) are derived for the p-values of the false portion. When deterministic critical values are increasing then the FDR is decreasing in each p-value belonging to a false null when the remaining p-values are kept fixed, see Benjamini and Yekutieli [6] or Benditkis et al. [2] for a short proof. Here and subsequently, we use "increasing" and "decreasing" in their weak form, i.e. equality is allowed, whereas other authors use "nondecreasing" and "nonincreasing" for this purpose. In that case the Dirac uniform configuration DU, see Example 2.1, has maximum FDR, i.e. it is an LFC. LFCs are sometimes useful tools for all kinds of proofs.

Remark 6.19

In contrast to (2.7) the original Storey adaptive test is based on for the estimator from (2.12). It is known that in this situation DU is not an LFC for the FDR, see Blanchard et al. [9]. However, we will see that for our modification the DU model is an LFC.

Our exact moment formulas provide various LFC results, which are collected below. To formulate these we introduce a new assumption:

  1. Let be increasing in each coordinate .

Below we are going to condition on . By (BI2) we may write , where represents the distribution of under for , and .

Theorem 6.20 (LFC for adaptive tests)

Suppose that (A2) is fulfilled. Define the vector .

  1. (Conditional LFC)

    1. The conditional FDR conditioned on

      only depends on the portion