Not All are Made Equal: Consistency of Weighted Averaging Estimators Under Active Learning

10/11/2019
by Jack Goetz, et al.

Active learning seeks to build the best possible model with a budget of labelled data by sequentially selecting the next point to label. However, the training set is no longer i.i.d., violating the conditions required by existing consistency results. Inspired by the success of Stone's Theorem, we aim to regain consistency for weighted averaging estimators under active learning. Based on ideas in <cit.>, our approach is to enforce a small amount of random sampling by running an augmented version of the underlying active learning algorithm. We generalize Stone's Theorem in the noise-free setting, proving consistency for well-known estimators such as k-NN, histogram and kernel estimators under conditions which mirror classical results. However, in the presence of noise we can no longer treat these estimators in a unified manner: for some, satisfying this condition also guarantees consistency in the noisy case, while for others we can achieve near-perfect inconsistency even while this condition holds. Finally, we provide conditions for consistency in the presence of noise, which give insight into why these estimators can behave so differently under the combination of noise and active learning.
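The augmentation idea described above can be sketched in code. In this hypothetical illustration (the names, the ε-mixing parameter, and the farthest-point active criterion are my own assumptions, not the paper's construction), the learner queries a uniformly random unlabelled point with a small probability and otherwise defers to its active criterion; predictions then come from a weighted averaging estimator, here k-NN with uniform weights on the k nearest labelled points:

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_predict(X_train, y_train, x, k=5):
    """k-NN regression: a weighted averaging estimator with uniform
    weights on the k nearest labelled points."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    return y_train[idx].mean()

def augmented_active_learner(pool_X, oracle, budget, eps=0.1):
    """Sketch of enforcing a small amount of random sampling: with
    probability eps query a uniformly random pool point; otherwise use
    a placeholder active criterion (farthest-point sampling)."""
    labelled = [0]                              # seed with one point
    y = {0: oracle(pool_X[0])}
    for _ in range(budget - 1):
        unlabelled = [i for i in range(len(pool_X)) if i not in y]
        if rng.random() < eps:                  # enforced random query
            i = int(rng.choice(unlabelled))
        else:                                   # active query (placeholder)
            dists = np.linalg.norm(
                pool_X[unlabelled][:, None] - pool_X[labelled][None], axis=2)
            i = unlabelled[int(np.argmax(dists.min(axis=1)))]
        labelled.append(i)
        y[i] = oracle(pool_X[i])
    X_l = pool_X[labelled]
    y_l = np.array([y[i] for i in labelled])
    return X_l, y_l
```

The ε-mixing keeps a random fraction of the labelled set, which is the ingredient the consistency arguments rely on; the active criterion itself is interchangeable.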


Related research

- 02/10/2021 — Improved Algorithms for Efficient Active Learning Halfspaces with Massart and Tsybakov noise: "We develop a computationally-efficient PAC active learning algorithm for..."
- 10/15/2022 — Active Learning with Neural Networks: Insights from Nonparametric Statistics: "Deep neural networks have great representation power, but typically requ..."
- 11/19/2022 — A Two-Stage Active Learning Algorithm for k-Nearest Neighbors: "We introduce a simple and intuitive two-stage active learning algorithm ..."
- 03/17/2018 — Structural query-by-committee: "In this work, we describe a framework that unifies many different intera..."
- 10/30/2016 — Active Learning from Imperfect Labelers: "We study active learning where the labeler can not only return incorrect..."
- 06/20/2014 — Noise-adaptive Margin-based Active Learning and Lower Bounds under Tsybakov Noise Condition: "We present a simple noise-robust margin-based active learning algorithm ..."
