
A New Perspective on Pool-Based Active Classification and False-Discovery Control

by Lalit Jain et al.

In many scientific settings there is a need for adaptive experimental design to guide the process of identifying regions of the search space that contain as many true positives as possible, subject to a low rate of false discoveries (i.e., false alarms). Such regions of the search space could differ drastically from a predicted set that minimizes 0/1 error, and accurate identification could require very different sampling strategies. As in active learning for binary classification, this experimental design cannot be optimally chosen a priori; rather, the data must be collected sequentially and adaptively. However, unlike classification with 0/1 error, collecting data adaptively to find a set with high true positive rate and low false discovery rate (FDR) is not as well understood. In this paper we provide the first provably sample-efficient adaptive algorithm for this problem. Along the way we highlight connections between classification, combinatorial bandits, and FDR control, making contributions to each.
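The abstract contrasts minimizing 0/1 error with selecting a set that trades true positives against false discoveries. As a minimal illustrative sketch (not the paper's algorithm), the two quantities for a candidate selected set can be computed as follows; the function name and toy data are invented for illustration:

```python
def tpr_fdr(selected, positives):
    """True positive rate and false discovery rate of a selected set.

    selected  -- set of item indices declared positive (discoveries)
    positives -- set of indices that are truly positive
    """
    true_pos = len(selected & positives)
    tpr = true_pos / len(positives) if positives else 0.0
    # Common convention: FDR is 0 when nothing is selected.
    fdr = (len(selected) - true_pos) / len(selected) if selected else 0.0
    return tpr, fdr

# A set minimizing 0/1 error need not be the set maximizing TPR at a
# fixed FDR level -- the two objectives reward different selections:
positives = {0, 1, 2, 3}
aggressive = {0, 1, 2, 3, 4, 5}   # full TPR, but 2 of 6 are false discoveries
conservative = {0, 1}             # zero FDR, but only half the positives found
print(tpr_fdr(aggressive, positives))     # -> (1.0, 0.333...)
print(tpr_fdr(conservative, positives))   # -> (0.5, 0.0)
```

An adaptive algorithm of the kind the paper studies would estimate these quantities from noisy sequential samples rather than from known labels.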

