Near-Optimal Bayesian Active Learning with Noisy Observations

10/15/2010
by Daniel Golovin, et al.

We tackle the fundamental problem of Bayesian active learning with noise, where we must adaptively select among a number of expensive tests in order to identify an unknown hypothesis sampled from a known prior distribution. In the case of noise-free observations, a greedy algorithm called generalized binary search (GBS) is known to perform near-optimally. We show that, perhaps surprisingly, GBS can perform very poorly when the observations are noisy. We develop EC2 (Equivalence Class Edge Cutting), a novel greedy active learning algorithm, and prove that it is competitive with the optimal policy, thereby obtaining the first competitiveness guarantees for Bayesian active learning with noisy observations. Our bounds rely on a recently discovered diminishing-returns property called adaptive submodularity, which generalizes the classical notion of submodular set functions to adaptive policies. Our results hold even if the tests have non-uniform costs and their noise is correlated. We also propose EffECXtive, a particularly fast approximation of EC2, and evaluate it on a Bayesian experimental design problem involving human subjects, intended to tease apart competing economic theories of how people make decisions under uncertainty.
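For intuition, the noise-free baseline the abstract refers to, generalized binary search, greedily picks the test whose outcomes most evenly bisect the remaining probability mass, then discards inconsistent hypotheses. The sketch below is a minimal illustration of that greedy rule, not the paper's implementation; the hypothesis and test names are hypothetical, and outcomes are assumed binary in {+1, -1}.

```python
def gbs_select_test(prior, outcomes, tests):
    """Greedy GBS rule: choose the test whose +1/-1 outcomes most evenly
    split the remaining probability mass, i.e. minimize the absolute
    signed mass |sum_h p(h) * outcome(h, t)|."""
    best_test, best_imbalance = None, float("inf")
    for t in tests:
        imbalance = abs(sum(p * outcomes[h][t] for h, p in prior.items()))
        if imbalance < best_imbalance:
            best_test, best_imbalance = t, imbalance
    return best_test

def gbs_identify(prior, outcomes, tests, oracle):
    """Run tests until one hypothesis remains, assuming noise-free
    observations (so inconsistent hypotheses can be discarded outright)."""
    alive = dict(prior)          # surviving hypotheses and their mass
    remaining = list(tests)
    while len(alive) > 1 and remaining:
        t = gbs_select_test(alive, outcomes, remaining)
        remaining.remove(t)
        result = oracle(t)       # noise-free answer for test t
        alive = {h: p for h, p in alive.items() if outcomes[h][t] == result}
    return max(alive, key=alive.get)

if __name__ == "__main__":
    # Illustrative setup: three hypotheses, two binary tests.
    prior = {"h0": 1 / 3, "h1": 1 / 3, "h2": 1 / 3}
    outcomes = {
        "h0": {"t0": +1, "t1": +1},
        "h1": {"t0": +1, "t1": -1},
        "h2": {"t0": -1, "t1": +1},
    }
    truth = "h1"
    guess = gbs_identify(prior, outcomes, ["t0", "t1"],
                         lambda t: outcomes[truth][t])
    print(guess)  # -> h1
```

The abstract's negative result is exactly that this discard-on-inconsistency step breaks down under noise, which is what motivates EC2's edge-cutting objective instead.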


