UPAL: Unbiased Pool Based Active Learning

11/08/2011
by Ravi Ganti, et al.

In this paper we address the problem of pool-based active learning and provide an algorithm, called UPAL, that works by minimizing an unbiased estimator of the risk of a hypothesis in a given hypothesis space. For the space of linear classifiers and the squared loss, we show that UPAL is equivalent to an exponentially weighted average forecaster. Exploiting recent results on the spectra of random matrices, we establish the consistency of UPAL when the true hypothesis is linear. Empirical comparisons with an active-learning implementation in Vowpal Wabbit and a previously proposed pool-based active learner show good performance and better scalability.
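The core idea the abstract describes, an unbiased estimator of the risk built from importance-weighted queried labels, can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the pool `X`, the sampling probabilities `p`, and the hypothesis `w_hat` are all hypothetical placeholders, and the reweighting shown is the standard Horvitz-Thompson construction for the squared loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pool of n unlabeled points with a linear ground truth
# (for illustration only; UPAL does not assume access to w_true).
n, d = 200, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=n)

# Probabilities with which an active learner queries each pool point
# (uniform here purely for simplicity).
p = np.full(n, 0.3)
queried = rng.random(n) < p  # points whose labels were actually requested

def unbiased_risk(w, X, y, p, queried):
    """Importance-weighted estimate of the squared-loss risk over the pool.

    Each queried point's loss is reweighted by 1/p_i, so in expectation
    over the querying randomness the estimate equals the full-pool
    empirical risk, even though most labels were never observed.
    """
    losses = (X[queried] @ w - y[queried]) ** 2
    return np.sum(losses / p[queried]) / len(y)

w_hat = np.zeros(d)  # a candidate linear hypothesis to evaluate
print(unbiased_risk(w_hat, X, y, p, queried))
```

A quick sanity check of the unbiasedness: if every point is queried with probability 1, the estimator reduces exactly to the ordinary empirical mean of the squared losses.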


Related research:

- Exponential Savings in Agnostic Active Learning through Abstention (01/31/2021). We show that in pool-based active classification without assumptions on ...
- Active Learning from the Web (10/15/2022). Labeling data is one of the most costly processes in machine learning pi...
- A note on active learning for smooth problems (03/16/2011). We show that the disagreement coefficient of certain smooth hypothesis c...
- Picking groups instead of samples: A close look at Static Pool-based Meta-Active Learning (11/01/2019). Active Learning techniques are used to tackle learning problems where ob...
- When an Active Learner Meets a Black-box Teacher (06/30/2022). Active learning maximizes the hypothesis updates to find those desired u...
- Active Model Aggregation via Stochastic Mirror Descent (03/28/2015). We consider the problem of learning convex aggregation of models, that i...
- Understanding Uncertainty Sampling (07/06/2023). Uncertainty sampling is a prevalent active learning algorithm that queri...
