Active Learning with Importance Sampling

10/10/2019
by Muni Sreenivas Pydi, et al.

We consider an active learning setting in which the algorithm has access to a large pool of unlabeled data and a small pool of labeled data. In each iteration, the algorithm chooses a few unlabeled data points and obtains their labels from an oracle. In this paper, we study a probabilistic querying procedure for choosing the points to be labeled. We propose an algorithm for Active Learning with Importance Sampling (ALIS) and derive upper bounds on the true loss incurred by the algorithm for any arbitrary probabilistic sampling procedure. Further, we propose an optimal sampling distribution that minimizes this upper bound on the true loss.
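The page reproduces only the abstract, but the querying mechanism it describes lends itself to a short sketch. The Python snippet below is a hypothetical illustration, not the paper's algorithm: the uncertainty-style scores, the probability floor, and all function names are assumptions. What it retains from the abstract is the core idea of ALIS-style querying: sample points to label according to a probability distribution p, and attach importance weights 1/(n p_i) so that losses estimated on the labeled sample stay unbiased for the pool.

```python
import numpy as np

rng = np.random.default_rng(0)

def query_distribution(scores, floor=1e-3):
    """Turn nonnegative informativeness scores into a sampling distribution.

    The floor keeps every probability bounded away from zero, which in turn
    bounds the importance weights 1/(n * p_i). (Hypothetical heuristic; the
    paper instead derives the distribution minimizing its loss bound.)
    """
    n = len(scores)
    p = scores / scores.sum()
    p = np.maximum(p, floor / n)
    return p / p.sum()

def alis_round(n_pool, scores, batch_size, rng=rng):
    """One round of probabilistic querying with importance weights.

    Samples `batch_size` pool indices (with replacement) according to p and
    returns them with weights 1/(n * p_i), so the weighted average of
    per-point losses on the queried points is an unbiased estimate of the
    average loss over the whole pool.
    """
    p = query_distribution(scores)
    idx = rng.choice(n_pool, size=batch_size, replace=True, p=p)
    weights = 1.0 / (n_pool * p[idx])
    return idx, weights

# Toy usage: score points by closeness to a decision boundary at x = 0.
X_pool = rng.normal(size=(200, 2))
scores = 1.0 / (1.0 + np.abs(X_pool[:, 0]))
idx, w = alis_round(len(X_pool), scores, batch_size=8)
print("queried indices:", idx)
print("importance weights:", np.round(w, 2))
```

In the paper, the sampling distribution is chosen to minimize the derived upper bound on the true loss, rather than the margin-style heuristic used in this sketch.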


