1 Introduction
Testing hypotheses simultaneously is a frequent issue in statistical practice, e.g. in genomic research. A widely used criterion for deciding which of these hypotheses should be rejected is the so-called "false discovery rate" (FDR) promoted by Benjamini and Hochberg [3]. The FDR is the expectation of the "false discovery proportion" (FDP), the ratio of the number of false rejections to the number of all rejections. Let a level α be given. Under the so-called basic independence (BI) assumption we have FDR = (n_0/n)α for the classical Benjamini and Hochberg linear step-up test, briefly denoted by BH test. Here, n_0 is the unknown number of true null hypotheses. To achieve higher power it is of great interest to exhaust the FDR level as fully as possible. Especially, if n_0/n is not close to 1 there is room for improvement of the BH test. That is why, since the beginning of this century, interest in adaptive tests has grown. The idea is to estimate n_0 by an appropriate estimator in a first step and to apply the BH test at the (data-dependent) level
in the second step. Heuristically, for a good estimator we obtain
that FDR ≈ α. Benjamini and Hochberg [4] suggested an early estimator for n_0 leading to an FDR-controlling test. Before them, Schweder and Spjøtvoll [31] had already discussed estimators for n_0 using plots of the empirical distribution function of the p-values. The number of estimators suggested in the literature is huge; here is only a short selection: Benjamini et al. [5], Blanchard and Roquain [7, 8] and Zeisel et al. [36]. We want to emphasize the papers of Storey [33] and Storey et al. [34] and, in particular, the Storey estimator based on a tuning parameter λ. We refer to Storey and Tibshirani [35] for a discussion of the adjustment of the tuning parameter λ. Generalized Storey estimators with data-dependent weights, which were already discussed by Heesen and Janssen [21], will be the prime example for our general results. A nice property of them is finite sample FDR control, see [21]. Sufficient conditions for finite sample FDR control with general estimators can be found in Sarkar [28] and Heesen and Janssen [20, 21]. Besides FDR control there are also other control criteria, for example the familywise error rate (FWER). Also for the control of the FWER adaptive tests, i.e. tests using a plug-in estimator for n_0, are used and discussed in the literature, see e.g. Finner and Gontscharuk [14] and Sarkar et al. [29].
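To fix ideas, here is a minimal Python sketch (our own illustration, with hypothetical function names) of the BH step-up rule and its adaptive plug-in variant; the estimator of the number of true nulls is supplied by the caller:

```python
import numpy as np

def bh_stepup(pvals, alpha):
    """Benjamini-Hochberg linear step-up test: reject the hypotheses with
    the R smallest p-values, where R is the largest i such that the i-th
    smallest p-value is at most i*alpha/n (R = 0 if no such i exists).
    Returns a boolean rejection vector."""
    p = np.asarray(pvals)
    n = p.size
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, n + 1) / n
    below = p[order] <= thresholds
    reject = np.zeros(n, dtype=bool)
    if below.any():
        r = int(np.max(np.nonzero(below)[0]) + 1)  # number of rejections R
        reject[order[:r]] = True
    return reject

def adaptive_bh(pvals, alpha, n0_hat):
    """Adaptive (plug-in) variant: run BH at the data-driven level
    alpha * n / n0_hat, where n0_hat estimates the number of true nulls."""
    n = len(pvals)
    return bh_stepup(pvals, alpha * n / n0_hat)
```

For instance, `adaptive_bh(p, 0.05, n0_hat)` with an estimator `n0_hat` smaller than `n` rejects at least as much as the plain BH test at level 0.05.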
Stochastic process methods were applied to study the asymptotic behavior of the FDP, among others to calculate asymptotic confidence intervals, and the familywise error rate (FWER) in detail, see
Genovese and Wasserman [17], Meinshausen and Bühlmann [24], Meinshausen and Rice [25] and Neuvial [26]. When dealing with a huge number of p-values the fluctuation of the FDP becomes, of course, relevant. Ferreira and Zwinderman [12] presented formulas for higher moments of the FDP for the BH test and Roquain and Villers [27] did so for step-up and step-down tests with general (but data-independent) critical values. We generalize these formulas to adaptive step-up tests using general estimators for n_0. In particular, we derive an exact finite sample formula for the variability of the FDP. As an application we discuss the consistency of the FDP and present sufficient and necessary conditions for it. We also discuss the more challenging case of sparsity in the sense that n_0/n → 1 as n → ∞. This situation can be compared to the one of Abramovich et al. [1], who derived an estimator of the (sparse) mean of a multivariate normal distribution using FDR procedures.
Outline of the results. In Section 2 we introduce the model as well as the adaptive step-up tests, and in particular the generalized Storey estimators which serve as prime examples. Section 3 provides exact finite sample variance formulas for the FDP under the BI model. Extensions to higher moments can be found in the appendix, see Section 9. These results are applied to the variability and the consistency of the FDP, see Section 4. Roughly speaking, we have consistency if we have stable estimators and the number of rejections tends to infinity. Section 5 is devoted to concrete adaptive step-up tests mainly based on convex combinations of generalized Storey estimators with data-dependent weights. We will see that consistency cannot be achieved in general. Under mild assumptions the adaptive tests based on the estimators mentioned above are superior to the BH test: the FDR is exhausted to a greater extent but remains finitely controlled by the level α. Furthermore, they are consistent at least when the BH test is consistent. In Section 6 we discuss least favorable configurations which serve as useful technical tools. For the reader's convenience we add a discussion and summary of the paper in Section 7. All proofs are collected in Section 8.
2 Preliminaries
2.1 The model and general stepup tests
Let us first describe the model and the procedures. A multiple testing problem consists of n null hypotheses with associated p-values on a common probability space. We will always use the basic independence (BI) assumption given by

The set of hypotheses can be divided into the disjoint union of the unknown portions of true nulls and false nulls, respectively. Denote by n_0 and n_1 the corresponding cardinalities.

The vectors of
p-values corresponding to the true and the false null hypotheses are independent, where every dependence structure is allowed for the p-values of the false hypotheses.
Throughout the paper let n_0 be non-random. As in Heesen and Janssen [21] the results can be extended to more general models with random n_0 by conditioning on n_0. By using this modification the results easily carry over to familiar mixture models discussed, for instance, by Abramovich et al. [1] and Genovese and Wasserman [17]. We study adaptive multiple step-up tests with estimated critical values extending the famous Benjamini and Hochberg [3] step-up test, briefly denoted by BH test. In the following we recall the definition of this kind of tests. Let
(2.1) 
denote possibly data dependent critical values. As an example for the critical values we recall the ones for the BH test, which do not depend on the data:
If p_{1:n} ≤ … ≤ p_{n:n} denote the ordered p-values then the number of rejections is given by R = max{i : p_{i:n} ≤ α_{i:n}} (with max ∅ := 0)
and the multiple procedure acts as follows:
Moreover, let
(2.2) 
be the number of falsely rejected null hypotheses. Then the false discovery rate
and the false discovery proportion are given by (2.3) FDP = V / max(R, 1) and FDR = E(FDP), where V is the quantity from (2.2) and R the number of rejections.
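The quantities R, V and FDP just defined can be evaluated directly for any step-up test; the following Python sketch (our own illustration, not code from the paper) does so for given critical values:

```python
import numpy as np

def stepup_counts(pvals, is_true_null, crit):
    """Return (R, V, FDP) for a step-up test with increasing critical
    values crit, where crit[i-1] is the threshold for the i-th smallest
    p-value; FDP = V / max(R, 1)."""
    p = np.asarray(pvals)
    s = np.sort(p)
    below = s <= np.asarray(crit)
    R = int(np.max(np.nonzero(below)[0]) + 1) if below.any() else 0
    t = crit[R - 1] if R > 0 else 0.0   # rejection threshold of the test
    V = int(np.sum(p[np.asarray(is_true_null)] <= t))
    return R, V, V / max(R, 1)
```

With the BH critical values iα/n this reproduces the BH test; the boolean mask `is_true_null` is of course unknown in practice and only available in simulations.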
Good multiple tests like the BH test or the frequently applied adaptive test of Storey et al. [34] control the FDR at a prespecified error bound α, at least under the BI assumption. Besides control, two further aspects are of importance and discussed below:

To make the test sensitive for signal detection the FDR should exhaust the level α as fully as possible.

On the other hand the variability of the FDP, see (2.3), is of interest in order to judge the stability of the test.
For a large class of adaptive tests exact FDR formulas were established in Heesen and Janssen [21], where the reader can also find new FDR-controlling tests with high FDR exhaustion. These formulas are now completed by formulas for exact higher FDP moments and, in particular, for the variance. These results open the door to a discussion of the consistency of multiple tests, i.e.
(2.4) 
Specific results are discussed in Sections 4 and 5. If then we have the following necessary condition for consistency:
(2.5) 
As already stated we cannot expect consistency in general. In the following we discuss the BH test for two extreme cases.
Example 2.1

Let be fixed. Then is minimal for the so-called Dirac-uniform configuration DU, where all p-values belonging to the false null hypotheses are equal to zero. Under this configuration we have convergence in distribution with
see Finner and Roters [16] and Theorem 4.8 of Scheer [30]. The limit variable belongs to the class of linear Poisson distributions, see
Finner et al. [15], Jain [22] and Consul and Famoye [10]. Hence, the BH test is never consistent under BI for fixed since (2.5) is violated.
Another extreme case is given by i.i.d. distributed p-values. Suppose that each p-value belonging to a false null is uniformly distributed on a fixed subinterval of the unit interval. Then the BH tests are not consistent, see Theorem 5.11(b).
More information about DU and least favorable configurations can be found in Section 6. The requirement for consistency lies somewhere between these two extreme cases, where the assumption will always be needed.
2.2 Our stepup tests and underlying assumptions
In the following we introduce the adaptive step-up tests considered in this paper. Let α be a fixed level, let λ ∈ (0, 1) be a tuning parameter, and we agree that no null hypothesis whose p-value exceeds λ should be rejected. The latter is not restrictive in practice since it is very unusual to reject a null hypothesis if the corresponding p-value exceeds, for instance, 1/2. We divide the range of the p-values into a decision region, where all p-values not exceeding λ have a chance to be rejected, and an estimation region, where the p-values are used to estimate n_0, see Figure 1.
To be more specific we consider estimators of the form
(2.6) 
for estimating n_0, which are measurable functions depending only on the p-values in the estimation region. As usual we denote the empirical distribution function of the p-values by F_n. As motivated in the introduction we now plug these estimators into the BH test. Doing this we obtain the data-driven critical values
(2.7) 
where we advocate using the upper bound λ, as Heesen and Janssen [21] already did. The following two quantities will be used frequently:
(2.8) 
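To illustrate the plug-in construction, the following sketch implements a Storey-type estimator with tuning parameter `lam` and the capped data-driven critical values described above. It is our own illustration: treat the exact form of the '+1' correction and of the cap as assumptions, since the paper's finite-sample details may differ.

```python
import numpy as np

def storey_n0(pvals, lam):
    """Storey-type estimator of n0: count the p-values in the estimation
    region (lam, 1] and rescale by the length of that region; the '+1'
    is the usual finite-sample correction (its exact form here is an
    assumption)."""
    p = np.asarray(pvals)
    return (np.sum(p > lam) + 1) / (1.0 - lam)

def plugin_critical_values(pvals, alpha, lam):
    """Data-driven critical values i * alpha / n0_hat, capped at lam so
    that no p-value above the tuning parameter lam can be rejected."""
    n = len(pvals)
    i = np.arange(1, n + 1)
    return np.minimum(i * alpha / storey_n0(pvals, lam), lam)
```

The cap at `lam` encodes the convention from above that only p-values in the decision region have a chance to be rejected.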
Throughout this paper, we investigate different mild assumptions. For our main results we fix the following two:

Suppose that

Suppose that is always positive and
If only the first assumption is valid then our results apply to appropriate subsequences. The most interesting case is the nontrivial one, since otherwise the FDR can be controlled simply by rejecting everything.
Remark 2.2
A prominent example of an adaptive test controlling the FDR by α is given by the Storey estimator (2.11):
(2.10)  
(2.11) 
A refinement was established by Heesen and Janssen [21]. They introduced a couple of inspection points, where n_0 is estimated on each of the resulting intervals. As a motivation for this idea observe that the Storey estimator can be rewritten as the following linear combination
with weights , where . The ingredients
(2.12) 
are also estimators for , which were used by Liang and Nettleton [23] in another context. Under BI the following theorem was proved by Heesen and Janssen [21]. A discussion of their consistency is given in Section 5.
Theorem 2.3 (cf. Thm 10 in [21])
Let be random weights for with . The adaptive step-up tests using the estimator
(2.13) 
control the FDR, i.e. FDR ≤ α.
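A hedged sketch of the weighted combination idea behind (2.13): per-interval estimators of the Liang-Nettleton type over consecutive inspection points, combined with convex weights. The finite-sample correction terms used in the paper are deliberately omitted here, so this is an illustration of the structure only.

```python
import numpy as np

def interval_n0(pvals, a, b):
    """Estimator of n0 of the Liang-Nettleton type built from the
    p-values falling into the interval (a, b]: their count rescaled by
    the interval length (finite-sample corrections omitted in this
    sketch)."""
    p = np.asarray(pvals)
    return np.sum((p > a) & (p <= b)) / (b - a)

def weighted_n0(pvals, knots, weights):
    """Convex combination, with weights summing to one, of the interval
    estimators over consecutive inspection points
    knots[0] < knots[1] < ... <= 1."""
    parts = [interval_n0(pvals, knots[j], knots[j + 1])
             for j in range(len(knots) - 1)]
    return float(np.dot(weights, parts))
```

With a single interval (λ, 1] and weight one this reduces to the Storey estimator up to the omitted '+1' correction; Theorem 2.3 allows random weights, whose exact measurability requirements are given in [21].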
3 Moments
This section provides exact second moment formulas of the FDP for our adaptive step-up tests for a fixed regime. Our method of proof relies on conditioning with respect to the σ-algebra
Conditionally on the (non-observable) σ-algebra the relevant quantities are fixed values, but only part of them are given by the data and observable. The FDR formula (2.9) is now completed by an exact variance formula. The proof also offers a quick approach to the known variance formula of Ferreira and Zwinderman [12] for the Benjamini and Hochberg test. Without loss of generality we can assume that the p-values are ordered by
Now, we introduce a new p-value vector: when possible, one suitable p-value is replaced (for convenience, the one with the smallest index having this property); otherwise the vector is left unchanged. Moreover, let the number of rejections of the adaptive test be computed for the substituted p-value vector. Note that the estimator remains unchanged when the substituted vector is considered instead of the original one.
Theorem 3.5
Suppose that our assumptions (A1) and (A2) are fulfilled:

The second moment of is given by

The variance of FDP fulfils

We have
Exact higher moment formulas are established in the appendix, see Section 9.
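To make the moment formulas tangible, the following Monte Carlo sketch (our own illustration) estimates the mean and variance of the FDP of the BH test under the Dirac-uniform configuration DU, where under BI the exact FDR equals (n_0/n)·α:

```python
import numpy as np

rng = np.random.default_rng(0)

def bh_fdp(p_true, p_false, alpha):
    """One realization of the FDP of the BH test; p_true are the
    p-values from the true nulls, p_false those from the false nulls."""
    p = np.concatenate([p_true, p_false])
    n = p.size
    s = np.sort(p)
    below = s <= alpha * np.arange(1, n + 1) / n
    R = int(np.max(np.nonzero(below)[0]) + 1) if below.any() else 0
    t = s[R - 1] if R > 0 else 0.0
    return np.sum(p_true <= t) / max(R, 1)

# Dirac-uniform configuration DU: n - n0 p-values fixed at zero plus
# n0 i.i.d. uniform p-values; estimate E(FDP) and Var(FDP) by simulation.
n, n0, alpha, reps = 200, 100, 0.1, 2000
fdps = np.array([bh_fdp(rng.uniform(size=n0), np.zeros(n - n0), alpha)
                 for _ in range(reps)])
# Under BI the exact BH-FDR is (n0/n) * alpha = 0.05 here; the empirical
# mean should be close to that value, with a small but positive variance.
```

Such simulations are a useful sanity check for the exact formulas of Theorem 3.5, though of course they cannot replace them.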
4 The variability of the FDP and the consistency of adaptive multiple tests
The exact variance formula applies to the stability of the FDP and to its consistency when the number of tests tends to infinity. If not stated otherwise, all limits are meant as n → ∞. In the following we need a further mild assumption:

There is some such that for all .
Clearly, (A3) is fulfilled for the trivial estimator and for all generalized weighted estimators of the form (2.13). Note that (A1) and (A3) imply that (2.5) is a necessary condition for consistency in this case. In the following we give bounds for the variance of the FDP depending on the leading term in the variance formula of Theorem 3.5:
Since under (A1) we have consistency iff . In the following we present sufficient and necessary conditions for this.
Theorem 4.7
Roughly speaking, consistency requires a number of rejections (4.4) tending to infinity and a stability condition (4.3) for the estimator.
Remark 4.8
Suppose that (A1)(A3) are fulfilled.

implies
(4.5) 
Under (4.5) we have and in probability and so by Theorem 3.5(c).
Under mild additional assumptions we can improve the convergence in expectation from Remark 4.8(b). Recall that the quantities involved depend, of course, on the prespecified level. In contrast to the rest of this paper, in the following theorem we consider more than one level. That is why we prefer, only for this theorem, the notation
Theorem 4.9
The next example points out that consistency may depend on the level and that adaptive tests may be consistent while the BH test is not. A proof of the statements is given in Section 8.
Example 4.10
Let the p-values from the true nulls be i.i.d. uniformly distributed. Consider p-values from the false nulls given by a fixed alternative. The BH test at one level is not consistent while the BH test at another level is consistent. But the adaptive test Stor using the classical Storey estimator (2.10) is consistent.
5 Consistent and inconsistent regimes
Below we will exclude the ugly estimator which could lead to rejecting all hypotheses with . To avoid this effect let us introduce:

There exists a constant with
Note that (A4) guarantees that (A2) holds at least with probability tending to one. The next theorem yields a necessary condition for consistency.
Theorem 5.11
Consistency for this case was already discussed by Genovese and Wasserman [17], who used a stochastic process approach. Also Ferreira and Zwinderman [12] used their formulas for the moments of the FDP to discuss the consistency of the BH test. By their Proposition 2.2 or our Theorem 4.7 it is sufficient to show convergence in probability of the relevant quantity, and for this purpose Ferreira and Zwinderman [12] found suitable conditions. The sparse signal case is more delicate even for adaptive tests. Recall for the following lemma the largest intersection point of the empirical distribution function and the random Simes line.
Lemma 5.12
Besides the result of Theorem 5.11 we already know that a further necessary condition for consistency is (4.3), which is assumed to be fulfilled in the following. Turning to convergent subsequences we can assume without loss of generality under (A1), (A3) and (A4) that convergence holds. In this case (4.3) is equivalent to
(5.1) 
In the following we will work with (5.1) rather than (4.3). Due to Lemma 5.12 the question of consistency can be reduced in the sparse signal case to the comparison of the random Simes line defined above and close to .
Theorem 5.13
Assume that (A1), (A3), (A4) and (5.1) hold. Let and be some sequence in such that and
(5.2) 
where and denote the empirical distribution functions of the p-values corresponding to the true and the false nulls, respectively.
Then in probability and so we have consistency by Theorem 4.7.
Remark 5.14

Suppose that are i.i.d. with distribution function . Then the statement of Theorem 5.13 remains valid if we replace (5.2) by the condition that for all sufficiently large
(5.3) A proof of this statement is given in Section 8.

In the case we need a sequence tending to .

For the DU-configuration the assumption (5.2) is fulfilled for as long as the necessary condition holds.
As already stated, consistency only holds under certain additional assumptions. In the following we compare the consistency of the classical BH test with that of adaptive tests using an appropriate estimator.
Lemma 5.15
Under some mild assumptions Lemma 5.15 is applicable for the weighted estimator (2.13), see Corollary 5.16(c) for sufficient conditions.
5.1 Combination of generalized Storey estimators
In the following we become more concrete by discussing the combined Storey estimators (2.13) introduced in Section 2. For this purpose we need the following assumption to ensure that (A4) is fulfilled.

Suppose that .
Corollary 5.16
Let (A1), (A5) and the assumptions of Theorem 2.3 be fulfilled. Consider the adaptive multiple test with .

Suppose that . Then (5.1) holds with and
(5.5) 
Suppose that and we have with probability one that
(5.6) for every . Moreover, assume that
(5.7) and all . If there is some and such that
(5.8) where , then we have an asymptotic improvement compared to the Benjamini-Hochberg procedure, i.e.
(5.9) 
(Consistency) Suppose that the weights are asymptotically constant, i.e. a.s. for all , and fulfil (5.6). Assume that
(5.10) and for all . Moreover, suppose that
(5.11) for some , where . Additionally, assume if . Then (5.1) holds for some and consistency of the BH test implies consistency of the adaptive test. Moreover, if (5.2) holds for and a sequence with then we always have consistency of the adaptive test.
It is easy to see that the assumptions of (c) imply those of (b). Typically, the p-values from the false nulls are stochastically smaller than the uniform distribution (with strict inequality for some of them). This may lead to (5.7) or (5.10).
Remark 5.17
5.2 Asymptotically optimal rejection curve
Our results can be transferred to general deterministic critical values (2.1), which are not of the form (2.7) and do not use a plug-in estimator for n_0. To label this case we use . Analogously to Section 4 and Section 9 we define for by setting the p-values from the true nulls to . By the same arguments as in the proof of Theorem 3.5 we obtain
The first formula can also be found in Benditkis et al. [2], see the proof of Theorem 2 therein. The proof of the second one is left to the reader. By these formulas we can now treat an important class of critical values given by
(5.12) 
A necessary condition for valid step-up tests is . This condition holds for the critical values (5.12) if
(5.13) 
These critical values are closely related to
(5.14) 
of the asymptotically optimal rejection curve introduced by Finner et al. [13]. Note that the case is excluded on purpose because it would lead to . The remaining coefficient has to be defined separately such that , see Finner et al. [13] and Gontscharuk [18] for a detailed discussion. It is well-known that neither for (5.12) nor for (5.14) do we have control of the FDR over all BI models simultaneously. This follows from Lemma 4.1 of Heesen and Janssen [20]. However, Heesen and Janssen [20] proved that for all fixed choices there exists a unique parameter such that
where the supremum is taken over all BI models at sample size . The value may be found under the least favorable configuration DU using numerical methods.
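For concreteness, here is a sketch of critical values derived from the asymptotically optimal rejection curve, with an additive adjustment parameter `beta` standing in for the modification discussed above (the exact parametrization of (5.12) in the paper may differ, so treat this as an assumption):

```python
import numpy as np

def aorc_critical_values(n, alpha, beta=0.0):
    """Critical values i*alpha / (n + beta - i*(1 - alpha)), i = 1..n,
    obtained by inverting the asymptotically optimal rejection curve
    t -> t / (t*(1 - alpha) + alpha) of Finner et al.; beta >= 0 plays
    the role of a finite-sample adjustment parameter (a sketch, not
    necessarily the paper's exact parametrization)."""
    i = np.arange(1, n + 1)
    return i * alpha / (n + beta - i * (1.0 - alpha))
```

For `beta = 0` the last critical value equals 1, which is the degenerate behavior that motivates treating the largest index separately and adjusting the curve in finite samples.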
By transferring our techniques to this type of critical values we get the following sufficient and necessary conditions for consistency.
6 Least favorable configurations and consistency
Below, least favorable configurations (LFC) are derived for the p-values of the false portion. When deterministic critical values are increasing then the FDR is decreasing in each argument, see Benjamini and Yekutieli [6] or Benditkis et al. [2] for a short proof. Here and subsequently, we use "increasing" and "decreasing" in their weak form, i.e. equality is allowed, whereas other authors use "nondecreasing" and "nonincreasing" for this purpose. In that case the Dirac-uniform configuration DU, see Example 2.1, has maximum FDR, i.e. it is an LFC. LFCs are sometimes useful tools for all kinds of proofs.
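As a numerical companion to the LFC discussion, the following Monte Carlo sketch (our own illustration; the configuration parameters are arbitrary) checks finite-sample FDR control of the Storey-based adaptive test under the DU configuration and under an alternative with uniformly distributed false-null p-values:

```python
import numpy as np

rng = np.random.default_rng(1)

def storey_adaptive_fdr(alt_fn, n, n0, alpha, lam=0.5, reps=4000):
    """Monte Carlo FDR under BI of the adaptive step-up test using the
    Storey estimator n0_hat = (#{p > lam} + 1) / (1 - lam) with critical
    values i*alpha/n0_hat capped at lam (a sketch of the construction
    recalled in Section 2)."""
    total = 0.0
    for _ in range(reps):
        p0 = rng.uniform(size=n0)              # true nulls: i.i.d. uniform
        p = np.concatenate([p0, alt_fn(n - n0)])
        n0_hat = (np.sum(p > lam) + 1) / (1.0 - lam)
        crit = np.minimum(np.arange(1, n + 1) * alpha / n0_hat, lam)
        s = np.sort(p)
        below = s <= crit
        R = int(np.max(np.nonzero(below)[0]) + 1) if below.any() else 0
        t = s[R - 1] if R > 0 else 0.0
        total += np.sum(p0 <= t) / max(R, 1)
    return total / reps

n, n0, alpha = 100, 80, 0.1
fdr_du = storey_adaptive_fdr(lambda m: np.zeros(m), n, n0, alpha)   # DU
fdr_alt = storey_adaptive_fdr(lambda m: rng.uniform(0.0, 0.5, size=m), n, n0, alpha)
# Both estimates should stay below the level alpha = 0.1 up to Monte
# Carlo error, with the DU configuration close to exhausting it.
```

Simulations of this kind illustrate, but do not prove, the LFC role of DU; the exact statements are the subject of this section.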
Remark 6.19
Our exact moment formulas provide various LFC results which are collected below. To formulate these we introduce a new assumption:

Let be increasing for each coordinate .
Below we are going to condition on . By (BI2) we may write , where represents the distribution of under for , and .
Theorem 6.20 (LFC for adaptive tests)
Suppose that (A2) is fulfilled. Define the vector .

(Conditional LFC)

The conditional FDR conditioned on
only depends on the portion
