The Generic Holdout: Preventing False-Discoveries in Adaptive Data Science

by Preetum Nakkiran, et al.
Harvard University

Adaptive data analysis has posed a challenge to science due to its ability to generate false hypotheses on moderately large data sets. In general, with non-adaptive data analyses (where queries to the data are generated without being influenced by answers to previous queries) a data set containing n samples may support exponentially many queries in n. This number reduces to linearly many under naive adaptive data analysis, and even sophisticated remedies such as the Reusable Holdout (Dwork et al., 2015) only allow quadratically many queries in n. In this work, we propose a new framework for adaptive science which exponentially improves on this number of queries under a restricted yet scientifically relevant setting, where the goal of the scientist is to find a single (or a few) true hypotheses about the universe based on the samples. Such a setting may describe the search for predictive factors of some disease based on medical data, where the analyst may wish to try a number of predictive models until a satisfactory one is found. Our solution, the Generic Holdout, involves two simple ingredients: (1) a partitioning of the data into an exploration set and a holdout set, and (2) a limited-exposure strategy for the holdout set. An analyst is free to use the exploration set arbitrarily, but when testing hypotheses against the holdout set, the analyst only learns the answer to the question: "Is the given hypothesis true (empirically) on the holdout set?" -- and no more information, such as "how well" the hypothesis fits the holdout set. The resulting scheme is immediate to analyze, but despite its simplicity we do not believe our method is obvious, as evidenced by the many violations in practice. Our proposal can be seen as an alternative to pre-registration, and allows researchers to get the benefits of adaptive data analysis without the problems of adaptivity.








1 Introduction

In science, it is natural to first collect data, and then form informed hypotheses based on it. This is arguably how much of science was historically done — after all, one is unlikely to come up with a correct theory of physics without first observing the world. In order for the result to be statistically valid, a scientist must then collect independent data, after fixing their hypothesis. However, this is not done in practice: in modern experimental science, it is common to first collect data, and then explore it to generate plausible hypotheses. This is what we consider adaptive science: wherein the scientist generates hypotheses after somehow interacting with the data set. Doing this naively could lead to being convinced of false hypotheses, since a scientist may “overfit” — that is, derive a false hypothesis that appears to be true on the data. This could occur if the scientist implicitly or explicitly tests many hypotheses before finding one that happens to fit the experimental data. For example, if the first hypothesis tested proves false on the data set, a scientist may want to use information about this first hypothesis test to revisit the data set, and form a new hypothesis, and so on.

There is growing recognition in the sciences that this “adaptive” method of doing science is not statistically sound, and leads to invalid claims. There is a recognized “reproducibility crisis” in Psychology, for example, after a collaboration failed to replicate 62 out of 97 studies with positive findings in three prominent psychology journals [8]. The problem of adaptivity (also known as “p-hacking” or “researcher degrees of freedom”) is recognized as a key contributor to this crisis, since it is often not correctly accounted for in standard statistical analyses [17, 22]. Other experimental areas such as neuroscience [2], economics [5], and social sciences [6] are also aware of this issue, after conducting reproducibility studies following the example of Psychology. In general, any methodology in which a researcher decides which hypothesis to test after somehow interacting with the data set is susceptible to this problem of adaptivity. Note that this includes scenarios like testing a second hypothesis after the initial hypothesis test failed, or doing a “data-dependent analysis” in which the hypothesis formed depends on some structure of the data set itself. One notable example of such data-dependent analysis is using Principal Component Analysis on the data set to find a correlation structure, and in turn using this structure to define a hypothesis that is tested against the same data set. The review article [17] provides many further examples of this problem of adaptivity arising in science. We provide an explicit, formal example of this problem in Section 2.1.

The preceding discussion motivates our scientific goal: we would like to have a statistically-valid scientific methodology that allows researchers to explore their data before generating hypotheses. We are considering the setting where the scientist is trying to derive a small number of true hypotheses, and would like to adaptively check if the proposed hypotheses are true. We want to guarantee that in this process, false hypotheses which are proposed are unlikely to be validated as true — that is, we would like to prevent false discoveries. Before describing our proposal, we first briefly discuss three proposed solutions to the problem of adaptivity in the sciences.

1. Preregistration. There has been a recent push in the scientific community towards preregistration: requiring scientists to commit to their scientific methods and hypotheses before conducting a study. Several prominent journals now encourage scientists to preregister their research, in order to preserve statistical rigor (for example: Open Science of the Royal Society, and Psychological Science of the Association for Psychological Science). An open letter published by more than 80 signatories calls for preregistration in the sciences  [7]. Preregistration does preserve statistical validity, but at the cost of adaptivity: the researcher is not allowed to explore the structure of data to generate hypotheses, and thus preregistration has been critiqued for slowing down the overall scientific process.

2. Naive Holdout. Keeping a holdout set is another way of addressing the adaptivity problem. The idea is: the scientist holds out part of the data set (without looking at it to form hypotheses), and is free to explore the remainder of the data set. Then, after exploring and proposing a hypothesis, the scientist checks the hypothesis against the holdout set. This is statistically valid, since the holdout data is independent of the hypothesis being tested, but has the disadvantage that the holdout can only be used once: if the scientist now wants to test another hypothesis, he/she must collect additional independent data to use as a holdout. This is sample-inefficient and impractical in settings where collecting data is expensive. In particular, the naive holdout can only handle linearly many hypothesis tests in the total size of the holdout sets. Note that it is not valid to naively re-use the same holdout set after seeing the results of the first hypothesis test (say, its p-value) — this can quickly lead to overfitting on the holdout set, in a way made precise in an extended example in Section 2.1.

3. Reusable Holdout. The Reusable Holdout [11, 10], a recent development in the field of Adaptive Data Analysis, manages to improve on the naive holdout, and handles up to quadratically many hypothesis tests in the holdout size (see Section 1.1 for a more complete discussion and comparison with the Reusable Holdout). The insight of the Reusable Holdout was to leak less information than the naive holdout, in part by only releasing a noisy estimate of how well the hypothesis fits the holdout set (instead of, say, the exact p-value), thus preventing overfitting to the holdout set.

We extend the reusable holdout idea, and propose that the holdout set should in fact leak no information about the hypothesis test, except for what is absolutely necessary: whether the hypothesis passed or not on the holdout set. In particular, it should not release any indication of “how well” the hypothesis fits the holdout set, such as a p-value. By this simple modification, our proposal allows exponentially-many adaptively-chosen false hypotheses to be invalidated, before the scientist discovers a true hypothesis. Moreover, it comes at no cost: in our scientific setting, there is no reason to leak more information from the holdout set, since ultimately we are only interested in preventing false discoveries. If we want to report p-values for the confirmed hypotheses, this can be done by simply keeping another holdout set, to use after the Generic Holdout, specifically for the purpose of finding the p-values of validated hypotheses that will be published.

Proposed Method: The Generic Holdout. To recap, our method is simple: first, the scientist collects the data, and keeps a holdout set for validation (without looking at it). The scientist is free to explore the rest of the data (exploration set) to come up with hypotheses that he/she deems plausible. Each time the scientist proposes a hypothesis, the validation procedure only returns “True” or “False”: whether the hypothesis passed validation on the holdout set, or not — it should not release more information, such as the p-value of the hypothesis. The scientist can revisit the rest of the data, and adaptively propose hypotheses, and continue until he/she proposes a hypothesis that is confirmed by validation. In general, the scientist can continue this process until a small number of hypotheses are confirmed.

We stress that our method applies specifically to the case where the scientist’s goal is to derive a small number of true hypotheses, and will stop adaptively proposing hypotheses once several of them are validated to be true.

These ideas are not technically novel, but we are not aware of the problem of how to prevent false discoveries in science being phrased and addressed with such minimal assumptions, and with the guarantees we provide.

Here we point out a benefit of the Generic Holdout, and clarify the sense in which it is “adaptive,” by contrasting it with a related method.

An alternative way of using the holdout data set to provide a statistically sound and sample-efficient methodology is the following: the data analyst interacts with the exploration set in an arbitrary way, and without looking at the holdout set produces a family of hypotheses, which is then validated simultaneously against the holdout set. In fact, this scenario is technically very similar to the Generic Holdout (as described below), but practically very different.

Indeed, let us consider the following thought experiment. The scientist is using the exploration set to come up with hypotheses, and yet whenever they have a hypothesis in mind, the scientist pretends that it has been invalidated on the holdout set, and proceeds to come up with the next hypothesis. Eventually, they would come up with a family of hypotheses independent of the holdout set, and could validate all of those simultaneously. This is exactly the same sequence that would be generated in the real interaction with the Generic Holdout, except potentially longer.

However, implementing the above thought experiment in practice is infeasible: generating the set of hypotheses up front requires the scientist to simulate their behavior in the (hypothetical) case that every hypothesis they propose to the holdout is false. This is infeasible in settings where generating hypotheses is very expensive (in CPU-hours, or scientist-hours), or settings where the scientist cannot properly simulate themselves. Moreover, it is unclear how many hypotheses should be generated in such a way, before moving to the validation stage. For example, consider large-scale physics experiments, where a scientific process is to first collect large amounts of data, and then explore the data to find interesting structures, and propose physical theories. Here, the process of investigating the data and coming up with theories of physics is extremely expensive. In this case, our proposal allows scientists to only invest effort in this process while they still have not derived a true hypothesis.


The primary application of the Generic Holdout, as discussed above, is to allow for adaptivity while preventing false discoveries — in particular, as an alternative to pre-registration. Here, our proposed method does not require the scientist to specify hypotheses ahead of time, before analyzing the data. Instead, the scientist only needs to specify how to determine whether a hypothesis is significant or not. Using the Generic Holdout, the scientist can use any data analysis method (valid or not) on their exploration set, and as long as they check with the holdout mechanism before publishing, they will not publish a false hypothesis except with small probability. Moreover, it is sample-efficient: they can test up to exponentially-many false hypotheses against the holdout set before a true hypothesis is confirmed.

Journals could even require that researchers submit their holdout set to the journal, without looking at it, and then journals implement the Generic Holdout mechanism themselves. That is, researchers (potentially interactively) submit hypotheses (with associated hypothesis tests) to the journals, which respond with a single bit.

The Generic Holdout also naturally applies to any data analysis procedure that involves several steps, each of which needs to be validated. For example, suppose the data analyst would like to first check “is the data well-clustered into 10 clusters?”, and then, based on this, search for a good kernel embedding of the data, and so on. The Generic Holdout allows for such procedures to be statistically sound, without requiring any understanding of the exact statistical properties of the analyst’s queries.

Another application of the Generic Holdout is in fields where the existing scientific process appears to be working, but journals would like to have statistical soundness guarantees. This is especially relevant when a large data set is collected once and made public, and many research groups subsequently investigate and publish findings about the data set (for example, as in Genome-Wide Association Studies [23, 9]). Here, the proposal is: journals request some holdout data from the group initially collecting the data set, and put it in a vault (without publishing it, or looking at it). For every submitted study using the common data set, journals first do their usual review process. Then, when a paper passes their usual review, they do a final validation check on the holdout data. In this setting, the Generic Holdout guarantees that the journal can validate exponentially-many true hypotheses, and the holdout is only extinguished once it catches several false hypotheses. (Note, this is a complementary setting to the first application.)

Organization. In Section 1.1, we discuss lines of prior work on adaptivity in the sciences, related to our proposal. We formally describe our scientific setup and goals in Section 2, phrased in the language of statistical hypothesis testing. We provide an extended formal example of problems that arise due to adaptivity in data analysis in Section 2.1. In Section 3 we describe our proposed method, the Generic Holdout, and formally state its statistical guarantees. We include an example instantiation of our generic framework in Section 3.2, illustrating a common setting where exponentially-many hypotheses can be tested until several true ones are discovered.

1.1 Related Works

There are many related works surrounding the problem of adaptivity in the sciences; we discuss and compare the most relevant ones below.

Reusable Holdout.

The proposal most similar to ours is the Reusable Holdout [11, 10], which developed out of ideas from Differential Privacy [13] and Adaptive Data Analysis [12]. The Reusable Holdout addresses a similar scientific problem — preventing false discoveries in data analysis — and proposes a very similar methodology. However, there are several key differences that allow us to improve on the Reusable Holdout in our setting.

The Reusable Holdout is a mechanism for interacting with the holdout set in (informally) the following way. When the scientist proposes a hypothesis, the mechanism first checks if the hypothesis “looks similar” on the holdout and exploration sets (i.e., if they have similar p-values, a measure of how well the hypothesis fits the data). If they are indeed similar, the mechanism essentially releases the p-value of the hypothesis on the exploration set, not involving the holdout data. If they are very different, the mechanism releases a noisy p-value on the holdout set. This mechanism leaks information about the holdout set (a noisy p-value) whenever the scientist proposes “bad” hypotheses, which are overfit to the exploration data. As a result, the Reusable Holdout can handle only quadratically-many “bad” hypotheses in the size of the holdout set. In our setting, where the scientist may use an arbitrary exploration procedure to generate hypotheses, many of the proposed hypotheses may in fact be “bad”, and the Reusable Holdout would quickly become unusable. In contrast, our method allows for up to an exponential number of “bad” hypotheses, as long as the scientist stops after discovering a few true ones.
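For concreteness, the core loop of such a mechanism can be sketched as follows. This is an informal rendering of the Thresholdout idea from [11], not the full algorithm: the parameter values are illustrative, the queries are simplified to statistical (mean) queries, and the budget accounting for “bad” queries is omitted.

```python
import numpy as np

def make_thresholdout(holdout, threshold=0.04, sigma=0.01, seed=0):
    """Informal sketch of the Thresholdout idea: answer a statistical query
    from the training data when train/holdout agree, and only touch the
    holdout (with Laplace noise) when they disagree."""
    rng = np.random.default_rng(seed)

    def answer(query, train):
        t = np.mean([query(x) for x in train])    # empirical mean on train
        h = np.mean([query(x) for x in holdout])  # empirical mean on holdout
        if abs(t - h) > threshold + rng.laplace(0, sigma):
            # Train and holdout disagree: the query overfits the training
            # data, so release only a *noisy* holdout value (this leaks!).
            return h + rng.laplace(0, sigma)
        # Train and holdout agree: answer from train; holdout leaks nothing.
        return t

    return answer
```

Each “disagreement” spends noise budget, which is why only quadratically-many overfit queries can be answered before the guarantees degrade.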

We stress that the technical details of the Generic Holdout are not novel (e.g., the SparseValidate mechanism of [10] is essentially the complement of our mechanism), but we believe our formalization of the scientific problem is meaningful, and our proposal cleanly solves this problem.

As an aside, note that the Reusable Holdout also releases estimated p-values of hypotheses, while the Generic Holdout only releases binary responses. However, to solve the scientific problem of preventing false discoveries, releasing binary responses is sufficient. Moreover, if we would like p-values for the validated hypotheses, we can estimate these by simply keeping another small holdout set. Finally, the Generic Holdout allows for testing a much more general class of hypotheses than the Reusable Holdout, and does not require any specialized analysis per hypothesis class.

Adaptive Data Analysis.

The recently-developed theory of Adaptive Data Analysis [10, 12, 3, 18] also addresses issues of how to do valid, adaptive science. At a high level, the goal of Adaptive Data Analysis is much more ambitious than our goal: there, the goal (informally) is to address the question of how to form good, generalizable hypotheses based on data. In contrast, our goal is merely to prevent a scientist from being convinced of false hypotheses; we do not give a procedure for deriving true hypotheses in the first place.

More formally, in Adaptive Data Analysis we have some underlying distribution $\mathcal{D}$ on the universe $X$, and the scientist would like to (approximately) know the results $q_1(\mathcal{D}), q_2(\mathcal{D}), \ldots$ of some adaptively-chosen sequence of statistical queries $q_1, q_2, \ldots$. The mechanism has access only to samples $x_1, \ldots, x_n \sim \mathcal{D}$, and must answer each query $q_i$ with an estimate $a_i \approx q_i(\mathcal{D})$. Thus, a scientist only interacting with the data through such a mechanism will always receive answers close to the true answers on the distribution, and thus will not generate hypotheses which are overfit to the data.

The tools developed in this area give computationally-efficient mechanisms for providing such estimates. However, due to the strong guarantees provided, these mechanisms can only correctly answer quadratically-many queries in the size of the data set [3]. Moreover, it is computationally intractable to answer more than polynomially-many queries in this setting [18].

We note that the Generic Holdout is well-suited to be used in conjunction with the methods of Adaptive Data Analysis. That is, we imagine a scientist using the tools of Adaptive Data Analysis (amongst possibly other methods) to generate hypotheses using the exploration set, and using the Generic Holdout to confirm them before publication.

Inference after Model Selection

This recent line of work [14, 15] focuses on a specific kind of analysis procedure, which proceeds in two stages (model selection, and inference given the model): for example, the analyst first selects influential variables via $\ell_1$-regularized regression (“model selection”) and then forms hypotheses based on these variables (“inference”). Adaptivity arises as a problem here for the same reason, since model-selection and inference are both performed on the same data set, and thus the hypotheses are dependent on the data they are tested against. This is essentially “2 rounds of adaptivity” in our setting. For specific kinds of data distributions and model-selection procedures, these works are able to precisely analyze how the hypotheses depend on the data, and thus give bounds on the performance of the overall procedure.

Our proposal is more general, in that it allows for multiple stages of adaptivity, each stage of which could be arbitrary. For example, our proposal would allow for model-selection in multiple stages, each of which needs to be validated with respect to the population distribution (e.g., “find a good embedding, then select influential variables, then cluster according to these…”). Moreover, our proposal makes no assumptions on the data distribution or hypothesis class, and can handle cases that may be hard to fully understand in the “inference after model selection” framework. Of course, for specific cases that can be understood, this framework could lead to tighter results than our generic framework.

Adaptive FDR Control.

There is a related line of work that is interested in controlling the False-Discovery-Rate (FDR) of hypothesis testing. The setting here is as follows. We have a fixed large set of hypotheses, and we want to simultaneously test all of them on the same data set, while bounding the False-Discovery-Rate: the fraction of false hypotheses among all hypotheses that passed validation (i.e., the “false discoveries” among all discoveries). The scientific motivation here is often that we want to prune our hypotheses to a small set of “interesting” ones, on which we will then conduct further independent testing. For example, if we are interested in finding genes which cause a disease, we may first test all the hypotheses “gene X is correlated with the disease” for every value of gene X, using a testing procedure with bounded FDR. Then, among the returned hypotheses, we can do further experiments to determine their effect (say, looking physically at the mechanism of the gene expression). Here, controlling the FDR is important, since we do not want to invest too many resources into experiments which are likely to be null.

There are various proposed methods for controlling FDR in different settings [4, 1], and in particular, recently there have been proposals to control FDR by adaptively deciding the order in which to test hypotheses, based on the results of past hypothesis tests [19, 21, 20]. This could potentially yield more powerful tests, i.e. tests that are more likely to discover true hypotheses.

These works on [adaptive] FDR control operate in a different setting from our work, first because their notion of “adaptive” is different (the large set of hypotheses is usually assumed to be fixed beforehand), and second because they are interested in a different notion of error (controlling the false-discovery rate, instead of preventing false discoveries overall). In statistical terminology, our proposal controls the “family-wise error rate (FWER)” instead of the FDR.

2 The Scientific Framework

In this section, we define our scientific framework, in the language of hypothesis testing.

There exists some universe $X$ and an underlying true distribution $\mathcal{D}^*$ on $X$, specified by Nature. (For example, $X$ could be the set of genomic sequences, and $\mathcal{D}^*$ the true distribution of human genomes.)

The scientist can form hypotheses about the true distribution (e.g., “gene X is correlated with disease Y”). Each hypothesis corresponds to a partition of the set of all distributions into a Null class and an Alternative class. (The Null class contains the distributions under which the hypothesis is false, and the Alternative class those under which it is true.)

For each hypothesis $h$ in the hypothesis class, we have a hypothesis test $T_h$ which takes $n$ independent samples from a distribution, and is supposed to accept under distributions in the Alternative class and reject under those in the Null class. The false-positive probability of each test is known as its p-value, and is given by

$$p \;=\; \sup_{\mathcal{D} \in \text{Null}} \; \Pr_{x_1, \ldots, x_n \sim \mathcal{D}} \left[\, T_h(x_1, \ldots, x_n) \text{ accepts} \,\right].$$

In classical (non-adaptive) science, the scientific process is: we first fix some hypothesis $h$, then collect $n$ independent samples from the true distribution $\mathcal{D}^*$, and run the hypothesis test $T_h$.

For a single fixed hypothesis, we are usually interested in controlling the false-positive probability of the hypothesis test. This gives evidence for believing in hypotheses which pass the hypothesis test, in the following sense: suppose a hypothesis test for hypothesis $h$ has false-positive probability $p$. Then, if the hypothesis were false, our experimental procedure would have invalidated it with large probability $1 - p$.

The setting where we have a fixed set of hypotheses, and want to test them all simultaneously, is known as multiple hypothesis testing. In this setting, we could want to control different notions of error — for example, controlling the overall probability of confirming a false hypothesis, or controlling the fraction of false hypotheses among confirmed hypotheses. Throughout this work, we will consider controlling the overall probability of confirming a false hypothesis (and further, our hypotheses will be generated adaptively).

In particular, we consider the general adaptive scientific process as follows. The scientist first collects a data set of independent samples from $\mathcal{D}^*$. Then, the scientist is interested in exploring the data set to find true hypotheses, and will eventually propose a hypothesis (or a small set of hypotheses) that s/he believes to be true. We would like to guarantee that the finally proposed hypotheses are in fact true — that is, we want to bound the false-positive probability of the proposed hypotheses.

The Generic Holdout is a general, sample-efficient method to achieve this.

2.1 The Problem with Adaptivity

In this section, we give an extended formal example that illustrates the problem of adaptivity in data analysis (a version of what is known as “Freedman’s paradox” [16]).

Naively, if we collect a data set, form a hypothesis based on it, and then test the hypothesis on the same data set, we lose all guarantees of correctness. This is essentially because if we are allowed to adapt to our data set (and choose among many hypotheses), we can easily “overfit” to our data set, and find some hypothesis that is true about the data but not true in Nature. As an informal example, say we collect data on a set of 20 random people, and let their set of names be $S$. Then we form the hypothesis “At least 99% of people have names in $S$.” Clearly this hypothesis is well-supported by the data, but entirely false. Moreover, this hypothesis would be correctly rejected if it were formed a priori, and tested on an independent set of people.
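The names example can be made concrete in a few lines. Here is a minimal simulation (the population of names is of course synthetic):

```python
import random

random.seed(0)
population = [f"name_{i}" for i in range(10_000)]  # synthetic universe of names
sample = random.sample(population, 20)
S = set(sample)

# Hypothesis formed AFTER seeing the data: "at least 99% of people have names in S".
support_on_data = sum(name in S for name in sample) / len(sample)        # 1.0: a perfect fit
true_fraction = sum(name in S for name in population) / len(population)  # 0.002: entirely false

# The same hypothesis, tested on an independent sample, is correctly rejected.
fresh = random.sample(population, 20)
support_on_fresh = sum(name in S for name in fresh) / len(fresh)  # almost surely near 0
```

The hypothesis fits the collected data perfectly, yet holds for only 0.2% of the population, and fresh data exposes it immediately.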

The above problem still exists if we do not look at the data set directly, but are allowed to adaptively choose hypotheses to test. That is, as scientists we are not committed to a set of hypotheses beforehand, but rather are interested in exploring the data set to find interesting structures. So we will first test some hypothesis against the data set, and then, seeing the results of this test (say, its p-value), we pick another hypothesis to test, and so on. In the example below, we will see that this can easily lead to a scientist being convinced of a false hypothesis which appears to be true on the data set (i.e., passes validation with a low p-value). Roughly what happens is: the scientist tests a series of “weak” hypotheses, and seeing the results of these hypothesis tests, combines them into a single “strong” hypothesis which is over-fit to the data set.

Formal Example. Let us consider the universe $X = \mathbb{R}^d \times \mathbb{R}$, and distributions over pairs $(x, y) \in X$.

We will form a sequence of hypotheses. Each hypothesis is of the form: “$\langle w, x \rangle$ is positively correlated with $y$,” for a unit vector $w \in \mathbb{R}^d$. That is, each hypothesis $H_w$ is specified by $w$, and the Alternative class for $H_w$ corresponds to distributions on $X$ for which $\mathbb{E}[\langle w, x \rangle \cdot y] > 0$. (The Null class for $H_w$ is the complement of the Alternative class.)

Note that the distribution where all coordinates of $x$, and $y$, are i.i.d. standard Gaussians belongs to the Null class for all hypotheses. Call this distribution the “Global Null.”

For a single, a priori fixed hypothesis $H_w$, it is sufficient to take $n = O(\log(1/p)/\tau^2)$ independent samples from the distribution in order to test this hypothesis with false-positive probability $p$. That is, the hypothesis test for $H_w$ takes samples $(x^{(1)}, y^{(1)}), \ldots, (x^{(n)}, y^{(n)})$, and tests if the empirical correlation $\hat{c}_w := \frac{1}{n} \sum_{j} \langle w, x^{(j)} \rangle y^{(j)}$ exceeds a threshold $\tau$. This test has p-value $p$, meaning that under any distribution from the Null class of $H_w$, this test rejects except with probability $p$ (said another way, $p$ is the “false-positive” probability).

Similarly, for any a priori fixed set of $m$ hypotheses, it is sufficient to take $n = O(\log(m/p)/\tau^2)$ samples. In statistical parlance, this is equivalent to the “Bonferroni Procedure”, i.e. the Union Bound, which says that to test $m$ fixed hypotheses simultaneously with error level $p$, one should test each individual hypothesis at level $p/m$.
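As a sanity check on the union-bound arithmetic above, the following sketch (with illustrative parameter choices, using one-sided z-tests as stand-in hypothesis tests) simulates many repetitions of testing a fixed family of true-null hypotheses, and confirms that testing each at level $p/m$ keeps the family-wise error near $p$:

```python
from statistics import NormalDist
import random

random.seed(0)
m, p = 100, 0.05
per_test_level = p / m  # Bonferroni-corrected per-test level: 0.0005
z_crit = NormalDist().inv_cdf(1 - per_test_level)  # one-sided critical value

# Repeatedly test m fixed (non-adaptive) null hypotheses, and measure the
# family-wise error rate: the probability that ANY true-null test passes.
trials = 2000
family_errors = 0
for _ in range(trials):
    zs = (random.gauss(0, 1) for _ in range(m))  # z-statistics under the null
    if any(z > z_crit for z in zs):
        family_errors += 1
fwer = family_errors / trials  # by the union bound, at most ~p (about 0.05)
```

The key caveat of the next paragraph is that this guarantee assumes the $m$ hypotheses were fixed before looking at any test results.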

Now, suppose we are in the Global Null distribution, and consider the following scientist who is trying to find a positive hypothesis in the class defined above. We will make only $d + 1$ queries total, so we decide to take $n = O(\log(d)/\tau^2)$ samples from the distribution (this is incorrect, as we will see, since it assumes our queries were fixed in advance). For the first $d$ queries, the scientist tests the hypotheses $H_{e_i}$, where $e_i$ is the $i$-th standard basis vector. Knowing the p-values from these tests, the scientist knows the empirical correlations $\hat{c}_{e_i}$ between each of the coordinates $x_i$ and $y$ on the samples. Each of these empirical correlations will have magnitude $\approx 1/\sqrt{n}$ in expectation. Now for the final query, the scientist checks the hypothesis $H_{w^*}$ for $w^* = \frac{1}{\sqrt{d}} \sum_i \mathrm{sign}(\hat{c}_{e_i})\, e_i$. This has empirical correlation $\approx \sqrt{d/n}$ by construction, since we sum all the coordinate-wise correlations. Note that with our choice of $n$, we have $\sqrt{d/n} \gg \tau$ for large enough $d$, meaning this hypothesis test passes, even though we were in the Null distribution.
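The adaptive attack above is easy to reproduce numerically. The following sketch (with illustrative choices of dimension, sample size, and acceptance threshold) draws data from the Global Null, reads off the per-coordinate empirical correlations, and combines their signs into a single hypothesis that passes the test:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 2000, 200  # dimension and sample size (illustrative choices)
tau = 0.5         # acceptance threshold for the empirical correlation

# Global Null: all coordinates of x, and y, are i.i.d. standard Gaussians,
# so no direction is truly correlated with y.
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Phase 1: test each coordinate hypothesis; the leaked results reveal the
# empirical correlations (each is small, on the order of 1/sqrt(n)).
c_hat = X.T @ y / n

# Phase 2: combine the leaked signs into one "strong" hypothesis.
w = np.sign(c_hat) / np.sqrt(d)
combined = (X @ w) @ y / n  # equals sum_i |c_hat_i| / sqrt(d), roughly sqrt(d/n)

print(f"largest honest correlation: {np.abs(c_hat).max():.3f}")  # below tau
print(f"combined correlation:       {combined:.3f}")             # well above tau
```

Every individual hypothesis is correctly rejected, yet the adaptively assembled one clears the threshold by a wide margin, despite the data being pure noise.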

Conclusions. The above shows that methods to do hypothesis-testing with a fixed set of hypotheses (e.g., controlling the p-values using the “Bonferroni Procedure”/union bound) can fail catastrophically when these hypotheses are chosen adaptively, knowing the results of previous hypothesis tests. In particular, a method for a priori testing may require exponentially more samples to be correct for adaptive testing. Note that this counterexample continues to hold if hypotheses are tested using cross-validation (i.e., each hypothesis is tested on a different random subset of the data set).

Looking closer at the above example, what happened is that the scientist first tested many “weak” hypotheses, which failed validation, but then combined the results of these weak hypotheses into a “strong” hypothesis, which passed validation. The Generic Holdout prevents such failures, by not releasing any additional information about weak hypotheses which do not validate.

The first (naive) example discussed in this section is a trivial manifestation of the problem with adaptivity, where the scientist is ridiculously malevolent. The second example, however, is much more enlightening and could serve as an abstraction for a mistake made by an honest, yet not careful enough, scientist!

3 Proposed Method: The Generic Holdout

We propose the following scientific methodology (the “Generic Holdout”).

  1. Take $n$ independent samples, and partition them into an exploration set and a holdout set.

  2. Set aside the holdout set and never look at it directly.

  3. Use the exploration set freely, in any way, to adaptively explore and propose hypotheses.

  4. When you have a plausible hypothesis $h$ in hand, prepare a hypothesis test for $h$, with the desired $p$-value, and apply this test on the holdout set, observing only the outcome of the test (whether it rejects the null hypothesis or not).

    It is crucial that the binary outcome of the test is the only information observed from the holdout set. One must not observe more information, for example the actual $p$-value of the test on the holdout set.

  5. You are free to adaptively repeat steps 3 and 4 to discover a small number of true hypotheses.
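The protocol above can be sketched as a small wrapper object. The class name, budget parameters, and interface below are illustrative choices, not code from the paper; the essential property is that `test` releases a single bit per query and enforces the budgets $T$ and $k$.

```python
import numpy as np

class GenericHoldout:
    """Holds the holdout set; releases only one bit per hypothesis test."""

    def __init__(self, holdout_data, max_queries, max_confirms):
        self._data = holdout_data
        self._queries_left = max_queries      # total query budget T
        self._confirms_left = max_confirms    # confirmation budget k

    def test(self, hypothesis_test):
        """hypothesis_test: data -> bool, prepared at the desired p-value."""
        if self._queries_left <= 0 or self._confirms_left <= 0:
            raise RuntimeError("holdout budget exhausted; collect fresh data")
        self._queries_left -= 1
        confirmed = bool(hypothesis_test(self._data))
        if confirmed:
            self._confirms_left -= 1
        return confirmed   # the single bit; never a p-value or a score

# Example: validate "the mean of the data exceeds 0.5" on a toy holdout set.
rng = np.random.default_rng(0)
holdout = GenericHoldout(rng.normal(loc=1.0, size=1000),
                         max_queries=100, max_confirms=1)
print(holdout.test(lambda data: data.mean() > 0.5))
```

Note that the wrapper refuses further queries once either budget is spent; at that point the holdout set has been "used up" and a fresh one is needed.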

3.1 Statistical Guarantees of the Generic Holdout

Here we set up some notation regarding the methodology proposed above that will be useful in further discussion. We consider some universe $X$, and collect a data set $S$, assumed to be a sequence of independent samples from some underlying population probability distribution $D$ over $X$. We partition it into $S_{\mathrm{hold}}$ — the holdout set — and $S_{\mathrm{expl}}$ — the exploration set. The scientist uses the exploration set to propose hypotheses $h_1, h_2, \dots$ together with tests for each of them — each hypothesis can depend arbitrarily on the exploration set, and on the results of all the previous tests.

When a scientist commits to use this mechanism until the number of validated hypotheses exceeds a specific threshold $k$, or the number of hypotheses tested altogether exceeds some specific threshold $T$, we wish to give a strong statistical guarantee on the false-positive rate for validated hypotheses. We focus on the scenario where $k \ll T$, i.e. we wish to discover only several true hypotheses, and we show that in this situation, the necessary size of the holdout set to achieve a fixed false-positive probability scales gracefully with the total number of trials $T$.

The choice of the size of the exploration set is not relevant to this discussion; clearly, a larger exploration set makes it easier for the scientist to produce valid hypotheses in the first place, but the acquisition and maintenance of a larger data set often comes with additional costs.

We will now formally define the adaptive hypothesis selection mechanism. [Adaptive hypothesis selection] We define a $(T, k)$-bounded adaptive hypothesis selection to be a sequence of (randomized) functions $f_1, \dots, f_T$ such that $h_t = f_t(S_{\mathrm{expl}}, r_1, \dots, r_{t-1})$, where $r_i \in \{0, 1\}$ is the result of the $i$-th hypothesis test. We think of $f_t$ as a randomized scheme specifying how to pick $h_t$, based on the exploration set, and the results of all previous hypothesis tests. We assume that after finding $k$ valid hypotheses, the researcher stops exploration, i.e. $f_t$ is constant whenever there are $k$ ones among $r_1, \dots, r_{t-1}$.

Our main theorem quantifies the false-positive guarantees of the Generic Holdout test. Consider a sequence of hypotheses generated as in Definition 3.1, that is, the scientist adaptively generates up to $T$ hypotheses, and stops once $k$ hypotheses are confirmed. If the $p$-value of each test is bounded by $p$, then the probability of false discovery in this workflow is bounded by $\left(\sum_{i=1}^{k} \binom{T}{i}\right) p \leq T^k p$. More formally,

$\Pr\left[\exists\, t \leq T : r_t = 1 \text{ and } h_t \text{ is null for } D\right] \leq \left(\sum_{i=1}^{k} \binom{T}{i}\right) p.$

The proof of this theorem is elementary; before we proceed with it, let us state explicitly an important interpretation of its statement.

Discussion. In order to achieve some target statistical significance, say $\delta$, over the whole process described above, we want to use a holdout set such that the guaranteed false-positive probability for each specific test is of the order of $\delta / T^k$. Often, for standard statistical tests, the required sample size scales like $\log(1/p)$ with the desired $p$-value $p$, and as such it is enough to use a holdout set of size $O(k \log T + \log(1/\delta))$.

To put it differently, once we have a fixed holdout set of size $n$, a desired $p$-value $\delta$, and a bound $k$ on the number of discovered “true” hypotheses (after which we stop using the collected holdout set for verification), we can issue $T = \exp(\Omega(n/k))$ queries in the workflow described above, and still have confidence $1 - \delta$ in the validity of all discovered hypotheses.

Remark. For $k = 1$, this bound exactly matches the “Bonferroni procedure” (i.e., the union bound) for testing a fixed set of $T$ non-adaptive hypotheses.
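A quick numeric sketch, assuming the guarantee has the union-bound form $p \cdot \sum_{i=1}^{k} \binom{T}{i} \leq p \, T^k$ (the helper name below is ours, not the paper's): for $k = 1$ this recovers Bonferroni exactly, and for small $k > 1$ a modestly smaller per-test $p$-value buys a huge query budget.

```python
from math import comb

def false_discovery_bound(p, T, k):
    """Union bound over the <= sum_{i=1}^{k} C(T, i) reachable hypotheses."""
    return p * sum(comb(T, i) for i in range(1, k + 1))

# k = 1 recovers the Bonferroni bound p * T exactly.
print(false_discovery_bound(1e-6, T=1000, k=1))

# With k = 2 and a per-test p-value of 1e-15, a million queries still leave
# the overall false-discovery probability around 5e-4.
print(false_discovery_bound(1e-15, T=10**6, k=2))
```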

Remark. The statement of the theorem remains unaffected in the complementary setting, where we expect the number of rejected hypotheses to be bounded by $k$. Here, we expect the scientist to use the mechanism until at most $k$ hypotheses are rejected, or at most $T$ queries are issued. In this scenario, we can again bound the probability of any false discovery by the same quantity.

Remark. Note that simply providing a mechanism for validating hypotheses with a small probability of false discoveries is trivial: the mechanism can just respond that every hypothesis tested is false. We would like mechanisms to also be useful, in that they allow for true discoveries. One possible formalization of the usefulness guarantees of the Generic Holdout, for $k = 1$, is as follows. Intuitively, we want to say that a scientist who follows a strategy that eventually proposes a valid hypothesis will discover this hypothesis while using the Generic Holdout. More formally, for a hypothesis $h$, a distribution $D$, and some associated test, one can define the power of the test as the probability that it accepts when $h$ holds for $D$; for $f_1, \dots, f_T$ as in Definition 3.1 and a distribution $D$, the probability that the workflow confirms a true hypothesis can then be lower-bounded in terms of this power.

Proof of Theorem 3.1.

Observe that the samples in the holdout set are assumed to be independent from the exploration set and from the internal randomness of the scientist. Let us, for now, assume that the selection of the $t$-th hypothesis depends only on the results $r_1, \dots, r_{t-1}$ of all previous tests, in a deterministic way.

Note that for a fixed sequence of functions $f_1, \dots, f_T$ as above (i.e. we assume that $f_t$ is constant if there are at least $k$ ones among $r_1, \dots, r_{t-1}$), there are at most $\sum_{i=1}^{k} \binom{T}{i}$ hypotheses that will ever be tested by this algorithm — this is a bound on the total range of all those functions. Consider the set $H$ given by the union of all the ranges of $f_1, \dots, f_T$. We know that $|H| \leq \sum_{i=1}^{k} \binom{T}{i}$, and moreover, if we fix $H$, the union bound over the (now non-adaptive) hypotheses in $H$ bounds the probability of any false discovery by $|H| \cdot p$.

For the general case, where $f_t$ is a randomized function that depends also on the exploration set, we can use the linearity of expectation — conditioning on any deterministic realization of the randomness of the scientist and on the value of the exploration set, the statement is true by the argument above, and therefore it is true in expectation over those random variables. ∎
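The counting step in the proof can be sanity-checked mechanically: response histories containing fewer than $k$ confirmations determine which hypotheses are reachable, and the hockey-stick identity collapses the resulting double sum. A quick script (names are our own):

```python
from math import comb

def reachable_hypotheses(T, k):
    """Histories r_1..r_{t-1} with at most k-1 ones determine h_t, t = 1..T."""
    return sum(comb(t - 1, j) for t in range(1, T + 1) for j in range(k))

# The double sum equals sum_{i=1}^{k} C(T, i) by the hockey-stick identity.
for T in range(1, 12):
    for k in range(1, 6):
        assert reachable_hypotheses(T, k) == sum(comb(T, i)
                                                 for i in range(1, k + 1))

print(reachable_hypotheses(100, 1))  # k = 1: exactly T = 100 hypotheses
```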

3.2 Example: Gapped Empirical Losses

In many natural situations, the hypothesis test takes a special form: thresholding an empirical loss evaluated on the sample at hand. Our general framework specializes to this case, and here we can give quantitative bounds on the number of samples required to bound the false-positive rate.

Specifically, suppose that with each hypothesis $h$ we have some associated loss function $\ell_h : X \to [0, 1]$, such that under any distribution $D$ in the null class of $h$, the expected loss satisfies $\mathbb{E}_{s \sim D}[\ell_h(s)] \geq c + \tau$, for some threshold $c$ and gap $\tau > 0$; moreover, suppose the hypothesis test is simply

(1)  accept $h$ if and only if the empirical loss on the holdout set satisfies $\frac{1}{n} \sum_{i=1}^{n} \ell_h(s_i) \leq c$.
In this case, we give quantitative bounds on the number of samples required for constant statistical confidence in validated hypotheses within the Generic Holdout framework. If the scientist makes $T$ adaptive hypothesis-test queries (generated as in Definition 3.1) on the holdout set, including at most $k$ that are confirmed to be valid, where each hypothesis test is of form (1), then using a holdout set of size $n = O\!\left(\frac{k \log T + \log(1/\delta)}{\tau^2}\right)$ is sufficient to guarantee that the probability of confirming a false hypothesis is at most $\delta$.
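Under these assumptions (losses in $[0,1]$ and gap $\tau$, so that Hoeffding's inequality gives a per-test $p$-value of $e^{-2\tau^2 n}$), the sufficient holdout size can be computed directly. The function below is our illustration of this calculation, not code from the paper:

```python
from math import ceil, exp, log

def holdout_size(T, k, delta, tau):
    """Smallest n with exp(-2 * tau**2 * n) * T**k <= delta
    (Hoeffding per-test p-value combined with the union bound over T**k)."""
    return ceil((k * log(T) + log(1 / delta)) / (2 * tau ** 2))

# Exponentially more queries cost only linearly more holdout samples:
# each 1000x increase in T adds the same fixed number of samples.
for T in (10**3, 10**6, 10**9):
    print(T, holdout_size(T, k=2, delta=0.05, tau=0.1))
```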

One concrete realization of this class of hypothesis tests is the following. Consider the class of multivariate normal distributions whose covariance matrix has bounded spectral norm, and the problem of finding a linear predictor $\langle w, x \rangle$ that is correlated with a target feature $y$. With each vector $w$ of unit norm, we can associate a loss function $\ell_w$ measuring the (negative) correlation between $\langle w, x \rangle$ and $y$. Hypotheses of this form can be generated by using linear regression on the exploration set, and then verified on the holdout set. Theorem 3.2 states that we can validate exponentially many hypotheses (with respect to the size of the given holdout set), as long as we stop upon discovering a few true hypotheses of this form.
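To make the workflow concrete, here is a toy end-to-end run (the planted signal, dimensions, and threshold $\tau$ are all illustrative assumptions of ours): fit a linear predictor on the exploration set, then ask the holdout set only the binary question of whether its correlation with $y$ clears the threshold.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_explore, n_holdout = 20, 500, 500

# Planted signal: y correlates with coordinate 0, so a true hypothesis exists.
def sample(n):
    X = rng.normal(size=(n, d))
    y = 0.5 * X[:, 0] + rng.normal(size=n)
    return X, y

Xe, ye = sample(n_explore)      # exploration set: used freely
Xh, yh = sample(n_holdout)      # holdout set: answers one bit per query

# Explore: least-squares fit, normalized to a unit-norm candidate direction.
w, *_ = np.linalg.lstsq(Xe, ye, rcond=None)
w /= np.linalg.norm(w)

# Validate: only the binary outcome leaves the holdout set.
tau = 0.2
passed = bool((Xh @ w) @ yh / n_holdout >= tau)
print(passed)
```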

4 Acknowledgements

We would like to thank Madhu Sudan, Boaz Barak, Lucas Janson, Jonathan Shi, and Thibaut Horel for helpful discussions during the course of this work.


  • [1] Rina Foygel Barber, Emmanuel J Candès, et al. Controlling the false discovery rate via knockoffs. The Annals of Statistics, 43(5):2055–2085, 2015.
  • [2] Deanna M Barch and Tal Yarkoni. Introduction to the special issue on reliability and replication in cognitive and affective neuroscience research, 2013.
  • [3] Raef Bassily, Kobbi Nissim, Adam Smith, Thomas Steinke, Uri Stemmer, and Jonathan Ullman. Algorithmic stability for adaptive data analysis. In Proceedings of the forty-eighth annual ACM symposium on Theory of Computing, pages 1046–1059. ACM, 2016.
  • [4] Yoav Benjamini and Yosef Hochberg. Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Series B (Methodological), pages 289–300, 1995.
  • [5] Colin F Camerer, Anna Dreber, Eskil Forsell, Teck-Hua Ho, Jürgen Huber, Magnus Johannesson, Michael Kirchler, Johan Almenberg, Adam Altmejd, Taizan Chan, et al. Evaluating replicability of laboratory experiments in economics. Science, 351(6280):1433–1436, 2016.
  • [6] Colin F Camerer, Anna Dreber, Felix Holzmeister, Teck-Hua Ho, Jürgen Huber, Magnus Johannesson, Michael Kirchler, Gideon Nave, Brian A Nosek, Thomas Pfeiffer, et al. Evaluating the replicability of social science experiments in nature and science between 2010 and 2015. Nature Human Behaviour, page 1, 2018.
  • [7] C. Chambers, M. Munafo, et al. Trust in science would be improved by study pre-registration. Guardian US, June 2013.
  • [8] Open Science Collaboration et al. Estimating the reproducibility of psychological science. Science, 349(6251):aac4716, 2015.
  • [9] Wellcome Trust Case Control Consortium et al. Genome-wide association study of 14,000 cases of seven common diseases and 3,000 shared controls. Nature, 447(7145):661, 2007.
  • [10] Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toni Pitassi, Omer Reingold, and Aaron Roth. Generalization in adaptive data analysis and holdout reuse. In Advances in Neural Information Processing Systems, pages 2350–2358, 2015.
  • [11] Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Aaron Roth. The reusable holdout: Preserving validity in adaptive data analysis. Science, 349(6248):636–638, 2015.
  • [12] Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Aaron Leon Roth. Preserving statistical validity in adaptive data analysis. In Proceedings of the forty-seventh annual ACM symposium on Theory of computing, pages 117–126. ACM, 2015.
  • [13] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In Theory of cryptography conference, pages 265–284. Springer, 2006.
  • [14] William Fithian, Dennis Sun, and Jonathan Taylor. Optimal inference after model selection. arXiv preprint arXiv:1410.2597, 2014.
  • [15] William Fithian, Jonathan Taylor, Robert Tibshirani, and Ryan Tibshirani. Selective sequential model selection. arXiv preprint arXiv:1512.02565, 2015.
  • [16] David A Freedman. A note on screening regression equations. The American Statistician, 37(2):152–155, 1983.
  • [17] Andrew Gelman and Eric Loken. The statistical crisis in science. American scientist, 102(6):460, 2014.
  • [18] Moritz Hardt and Jonathan Ullman. Preventing false discovery in interactive data analysis is hard. In Foundations of Computer Science (FOCS), 2014 IEEE 55th Annual Symposium on, pages 454–463. IEEE, 2014.
  • [19] Lihua Lei and William Fithian. Adapt: an interactive procedure for multiple testing with side information. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 80(4):649–679, 2018.
  • [20] Ang Li and Rina Foygel Barber. Multiple testing with the structure adaptive benjamini-hochberg algorithm. arXiv preprint arXiv:1606.07926, 2016.
  • [21] Ang Li and Rina Foygel Barber. Accumulation tests for FDR control in ordered hypothesis testing. Journal of the American Statistical Association, 112(518):837–849, 2017.
  • [22] Joseph P Simmons, Leif D Nelson, and Uri Simonsohn. False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological science, 22(11):1359–1366, 2011.
  • [23] Danielle Welter, Jacqueline MacArthur, Joannella Morales, Tony Burdett, Peggy Hall, Heather Junkins, Alan Klemm, Paul Flicek, Teri Manolio, Lucia Hindorff, et al. The NHGRI GWAS Catalog, a curated resource of SNP-trait associations. Nucleic Acids Research, 42(D1):D1001–D1006, 2013.