Lower bounds in multiple testing: A framework based on derandomized proxies

by Max Rabinovich, et al.

The bulk of work in multiple testing has focused on specifying procedures that control the false discovery rate (FDR), with relatively less attention paid to the corresponding Type II error known as the false non-discovery rate (FNR). A more recent line of work in multiple testing has begun to investigate the tradeoffs between the FDR and FNR and to provide lower bounds on the performance of procedures that depend on the model structure. Lacking thus far, however, has been a general approach to obtaining lower bounds for a broad class of models. This paper introduces an analysis strategy based on derandomization, illustrated by applications to various concrete models. Our main result is a meta-theorem that gives a general recipe for obtaining lower bounds on the combination of FDR and FNR. We illustrate this meta-theorem by deriving explicit bounds for several models, including instances with dependence, scale-transformed alternatives, and non-Gaussian-like distributions. We provide numerical simulations of some of these lower bounds, and show a close relation to the actual performance of the Benjamini-Hochberg (BH) algorithm.
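The Benjamini-Hochberg procedure mentioned above is the standard point of comparison for FDR-controlling methods. A minimal sketch of it, assuming a NumPy-style implementation (the function name and default level are illustrative, not from the paper):

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.1):
    """Benjamini-Hochberg step-up procedure.

    Given p-values p_(1) <= ... <= p_(m), find the largest k with
    p_(k) <= k * alpha / m and reject the hypotheses with the k
    smallest p-values. Returns a boolean rejection mask in the
    original order of `p_values`.
    """
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    sorted_p = p[order]
    # Step-up thresholds k * alpha / m for k = 1, ..., m.
    thresholds = alpha * np.arange(1, m + 1) / m
    below = np.nonzero(sorted_p <= thresholds)[0]
    reject = np.zeros(m, dtype=bool)
    if below.size > 0:
        k = below[-1]  # 0-indexed position of the largest qualifying k
        reject[order[: k + 1]] = True
    return reject
```

Note the step-up structure: a p-value above its own threshold can still be rejected if some larger p-value falls below its (more generous) threshold, which is what distinguishes BH from a simple per-hypothesis cutoff.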
