The Generic Holdout: Preventing False-Discoveries in Adaptive Data Science

09/14/2018
by Preetum Nakkiran, et al.

Adaptive data analysis has posed a challenge to science due to its ability to generate false hypotheses on moderately large data sets. In general, with non-adaptive data analyses (where queries to the data are generated without being influenced by answers to previous queries), a data set containing n samples may support exponentially many queries in n. This number reduces to linearly many under naive adaptive data analysis, and even sophisticated remedies such as the Reusable Holdout (Dwork et al. 2015) only allow quadratically many queries in n.

In this work, we propose a new framework for adaptive science which exponentially improves on this number of queries under a restricted yet scientifically relevant setting, where the goal of the scientist is to find a single (or a few) true hypotheses about the universe based on the samples. Such a setting may describe the search for predictive factors of some disease based on medical data, where the analyst may wish to try a number of predictive models until a satisfactory one is found.

Our solution, the Generic Holdout, involves two simple ingredients: (1) a partitioning of the data into an exploration set and a holdout set, and (2) a limited exposure strategy for the holdout set. An analyst is free to use the exploration set arbitrarily, but when testing hypotheses against the holdout set, the analyst only learns the answer to the question: "Is the given hypothesis true (empirically) on the holdout set?" -- and no more information, such as "how well" the hypothesis fits the holdout set. The resulting scheme is immediate to analyze, but despite its simplicity we do not believe our method is obvious, as evidenced by the many violations in practice. Our proposal can be seen as an alternative to pre-registration, and allows researchers to get the benefits of adaptive data analysis without the problems of adaptivity.
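The two ingredients described above can be sketched in a few lines of code. This is a minimal illustration, not the paper's formalization: the class and method names are invented here, and treating "empirically true on the holdout" as an accuracy threshold is an illustrative assumption.

```python
import numpy as np

class GenericHoldout:
    """Sketch of the two-ingredient scheme: an exploration set the
    analyst may use arbitrarily, plus a holdout set that answers only
    yes/no queries (limited exposure)."""

    def __init__(self, X, y, holdout_frac=0.5, seed=0):
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(X))
        cut = int(len(X) * (1 - holdout_frac))
        # Exploration set: handed to the analyst for unrestricted use.
        self.X_explore, self.y_explore = X[idx[:cut]], y[idx[:cut]]
        # Holdout set: kept private, exposed only through test().
        self._X_hold, self._y_hold = X[idx[cut:]], y[idx[cut:]]

    def test(self, predict, threshold):
        """Return ONLY whether the hypothesis holds on the holdout
        (here: accuracy >= threshold). The score itself is never
        revealed to the analyst."""
        acc = np.mean(predict(self._X_hold) == self._y_hold)
        return bool(acc >= threshold)
```

An analyst would fit candidate models on `gh.X_explore` / `gh.y_explore` and call `gh.test(model.predict, threshold)` for each; because each query leaks only one bit, far more adaptive queries can be supported than if the holdout scores were returned directly.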


