Active Local Learning

08/31/2020
by Arturs Backurs, et al.

In this work we consider active local learning: given a query point x and active access to an unlabeled training set S, output the prediction h(x) of a near-optimal h ∈ H using significantly fewer labels than would be needed to actually learn h fully. In particular, the number of label queries should be independent of the complexity of H, and the function h should be well defined independently of x. This immediately also yields an algorithm for distance estimation: estimating the value opt(H) from far fewer labels than needed to actually learn a near-optimal h ∈ H, by running local learning on a few random query points and computing the average error.

For the hypothesis class consisting of functions supported on the interval [0,1] with Lipschitz constant bounded by L, we present an algorithm that makes O((1/ϵ^6) log(1/ϵ)) label queries from an unlabeled pool of O((L/ϵ^4) log(1/ϵ)) samples. It estimates the distance to the best hypothesis in the class to an additive error of ϵ for an arbitrary underlying distribution. We further generalize our algorithm to more than one dimension. We emphasize that the number of labels used is independent of the complexity of the hypothesis class, which depends on L. Furthermore, we give an algorithm to locally estimate the values of a near-optimal function at a few query points of interest, with a number of labels independent of L.

We also consider the related problem of approximating the minimum error achievable by the Nadaraya-Watson estimator under a linear diagonal transformation with eigenvalues coming from a small range. For a d-dimensional point set of size N, our algorithm achieves an additive approximation of ϵ, makes Õ(d/ϵ^2) queries, and runs in Õ(d^2/ϵ^{d+4} + dN/ϵ^2) time.
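The reduction from local learning to distance estimation described above can be sketched in a few lines. This is a minimal illustration, not the paper's algorithm: `local_predict` stands in for a hypothetical active local learner returning h(x) for a fixed near-optimal h (assumed consistent across calls), and `labels` stands in for the label oracle, with each call counted as one label query.

```python
import random

def estimate_opt(local_predict, pool, labels, num_queries, loss, rng=None):
    """Estimate opt(H) by querying the local learner at a few random
    points from the unlabeled pool and averaging the loss against the
    actively queried labels.

    local_predict -- hypothetical oracle x -> h(x) for a fixed near-optimal h
    pool          -- unlabeled sample S (a list of points)
    labels        -- label oracle x -> y; each call costs one label
    loss          -- pointwise loss, e.g. abs(p - y) for Lipschitz regression
    """
    rng = rng or random.Random(0)
    queries = [pool[rng.randrange(len(pool))] for _ in range(num_queries)]
    # Average disagreement at the sampled points approximates the error of h,
    # and hence opt(H), to additive accuracy depending on num_queries.
    return sum(loss(local_predict(x), labels(x)) for x in queries) / num_queries
```

The label cost here is `num_queries` plus whatever the local learner spends per call, which is exactly why a label-efficient local learner yields label-efficient distance estimation.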


