
Dynamic Algorithms for Online Multiple Testing

10/26/2020
by Ziyu Xu, et al.

We present new algorithms for online multiple testing that provably control false discovery exceedance (FDX) while achieving orders of magnitude more power than previous methods. This statistical advance is enabled by new algorithmic ideas: earlier algorithms are more "static," whereas our new ones allow the dynamic adjustment of testing levels based on the amount of wealth the algorithm has accumulated. For our new algorithm, SupLORD, we also prove relationships between controlling FDR, FDX, and other error metrics, showing how controlling one metric can simultaneously control the others. Finally, we show in a variety of synthetic experiments that our algorithms achieve higher power.
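
To make the wealth idea concrete, here is a minimal sketch of a generic alpha-investing-style online testing loop in which the level assigned to each test depends on the wealth accumulated so far. This is not the SupLORD procedure from the paper: the function name, constants, and the particular spending rule are illustrative assumptions, and the sketch carries no formal FDX or FDR guarantee.

```python
# Illustrative sketch of a wealth-based online testing loop (alpha-investing
# style). NOT the SupLORD algorithm; the spending rule and constants are
# simplified placeholders chosen for clarity.

def online_testing(p_values, initial_wealth=0.0125, reward=0.0375, spend_frac=0.1):
    """Process p-values one at a time, spending 'wealth' on each test level.

    A discovery (p_t <= alpha_t) earns back `reward`; otherwise the wealth
    allocated to that test is lost.
    """
    wealth = initial_wealth
    discoveries = []
    for t, p in enumerate(p_values, start=1):
        # Dynamic level: spend a fraction of the current wealth on this test.
        alpha_t = spend_frac * wealth
        if p <= alpha_t:
            discoveries.append(t)          # reject hypothesis t
            wealth += reward - alpha_t     # discovery: earn a reward
        else:
            wealth -= alpha_t              # no discovery: the level is spent
    return discoveries
```

The contrast with "static" schedules lies in the line computing alpha_t: a static rule would draw each level from a fixed, pre-specified sequence, whereas here the level adapts to the discoveries made earlier in the stream.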


Related research

The Power of Batching in Multiple Hypothesis Testing (10/11/2019)
One important partition of algorithms for controlling the false discover...

Only Closed Testing Procedures are Admissible for Controlling False Discovery Proportions (01/15/2019)
We consider the class of all multiple testing methods controlling tail p...

Contextual Online False Discovery Rate Control (02/07/2019)
Multiple hypothesis testing, a situation when we wish to consider many h...

A framework for Multi-A(rmed)/B(andit) testing with online FDR control (06/16/2017)
We propose an alternative framework to existing setups for controlling f...

Treatment Effect Detection with Controlled FDR under Dependence for Large-Scale Experiments (10/14/2021)
Online controlled experiments (also known as A/B Testing) have been view...

Learning Memory-Efficient Stable Linear Dynamical Systems for Prediction and Control (06/06/2020)
Learning a stable Linear Dynamical System (LDS) from data involves creat...

Drift Detection in Episodic Data: Detect When Your Agent Starts Faltering (10/22/2020)
Detection of deterioration of agent performance in dynamic environments ...