Active Regression by Stratification

10/22/2014
by Sivan Sabato, et al.

We propose a new active learning algorithm for parametric linear regression with random design. We provide finite-sample convergence guarantees for general distributions in the misspecified model. This is the first active learner for this setting that can provably improve over passive learning. Unlike in other learning settings (such as classification), in regression the passive learning rate of O(1/ϵ) cannot in general be improved upon. Nonetheless, the so-called "constant" in the rate of convergence, which is characterized by a distribution-dependent risk, can be improved in many cases. For a given distribution, achieving the optimal risk requires prior knowledge of the distribution. Following the stratification technique advocated in Monte Carlo function integration, our active learner approaches the optimal risk using piecewise constant approximations.
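To give intuition for the stratification idea the abstract refers to, here is a minimal sketch of stratified versus plain Monte Carlo estimation of an integral. This is an illustration of the underlying variance-reduction principle only, not the authors' active regression algorithm; the integrand `f` and the equal-width strata are assumptions chosen for simplicity.

```python
import random

def f(x):
    # example integrand on [0, 1]; true integral is 1/3
    return x * x

def plain_mc(n, rng):
    # plain Monte Carlo estimate of the integral of f over [0, 1]
    return sum(f(rng.random()) for _ in range(n)) / n

def stratified_mc(n, strata, rng):
    # split [0, 1] into equal-width strata and spend the sample
    # budget evenly; within-stratum variance is small, so the
    # combined estimate has lower variance than plain MC
    per = n // strata
    width = 1.0 / strata
    total = 0.0
    for k in range(strata):
        lo = k * width
        mean_k = sum(f(lo + width * rng.random()) for _ in range(per)) / per
        total += width * mean_k
    return total
```

The active learner in the paper plays an analogous game: by allocating label queries according to a piecewise constant (stratified) approximation of the optimal sampling distribution, it approaches the distribution-dependent optimal risk without prior knowledge of the distribution.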


