
Single Shot Active Learning using Pseudo Annotators

by Yazhou Yang, et al.

Standard myopic active learning assumes that human annotations are obtainable whenever new samples are selected. This assumption, however, is unrealistic in many real-world applications where human experts are not available at all times. In this paper, we consider the single shot setting: all the required samples must be chosen in a single shot, and no human annotation can be exploited during the selection process. We propose a new method, Active Learning through Random Labeling (ALRL), which replaces the single human annotator with multiple annotators that we refer to as pseudo annotators. These pseudo annotators provide uniform, random labels whenever new unlabeled samples are queried. This random labeling enables standard active learning algorithms to exhibit the exploratory behavior needed for single shot active learning. The exploratory behavior is further enhanced by selecting the most representative sample, i.e., the one that minimizes the nearest neighbor distance between the unlabeled samples and the queried samples. Experiments on real-world datasets demonstrate that the proposed method outperforms several state-of-the-art approaches.
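To make the two ingredients of the abstract concrete, the sketch below illustrates (a) pseudo annotators that return uniform random labels and (b) a greedy representativeness criterion that picks the unlabeled sample whose addition most reduces the total nearest-neighbor distance from the unlabeled pool to the queried set. This is a minimal illustration under our own assumptions (Euclidean distance, a greedy one-step selection), not the authors' exact ALRL implementation.

```python
import numpy as np

def select_representative(X, queried_idx):
    """Greedily pick the unlabeled sample whose addition most reduces the
    total nearest-neighbor distance from the pool to the queried set.
    This mirrors the representativeness criterion described in the abstract,
    under the assumption of Euclidean distance and greedy selection."""
    pool = [i for i in range(len(X)) if i not in queried_idx]
    if queried_idx:
        Q = X[list(queried_idx)]
        # current distance of each point to its nearest queried sample
        base = np.linalg.norm(X[:, None, :] - Q[None, :, :], axis=-1).min(axis=1)
    else:
        base = np.full(len(X), np.inf)  # nothing queried yet
    best, best_cost = None, np.inf
    for c in pool:
        d_to_c = np.linalg.norm(X - X[c], axis=1)
        # total nearest-neighbor distance if candidate c were queried
        cost = np.minimum(base, d_to_c).sum()
        if cost < best_cost:
            best, best_cost = c, cost
    return best

def pseudo_annotate(n_queried, n_classes, rng=None):
    """Pseudo annotators: uniform random labels for the queried samples."""
    rng = rng or np.random.default_rng(0)
    return rng.integers(0, n_classes, size=n_queried)
```

For example, on the 1-D pool `[0, 1, 2, 10]` with nothing queried yet, the criterion selects a central point rather than the outlier, which is the exploratory behavior the single shot setting requires.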




SIMILAR: Submodular Information Measures Based Active Learning In Realistic Scenarios

Active learning has proven to be useful for minimizing labeling costs by...

Exploiting Counter-Examples for Active Learning with Partial labels

This paper studies a new problem, active learning with partial labels (A...

Active Learning Using Uncertainty Information

Many active learning methods belong to the retraining-based approaches, ...

Active Few-Shot Classification: a New Paradigm for Data-Scarce Learning Settings

We consider a novel formulation of the problem of Active Few-Shot Classi...

Batch Active Learning Using Determinantal Point Processes

Data collection and labeling is one of the main challenges in employing ...

Online Active Learning with Dynamic Marginal Gain Thresholding

The blessing of ubiquitous data also comes with a curse: the communicati...

The Practical Challenges of Active Learning: Lessons Learned from Live Experimentation

We tested in a live setting the use of active learning for selecting tex...