Minimax Active Learning

by Sayna Ebrahimi et al.

Active learning aims to develop label-efficient algorithms by querying the most representative samples to be labeled by a human annotator. Current active learning techniques either rely on model uncertainty to select the most uncertain samples or use clustering or reconstruction to choose the most diverse set of unlabeled examples. While uncertainty-based strategies are susceptible to outliers, solely relying on sample diversity does not capture the information available on the main task. In this work, we develop a semi-supervised minimax entropy-based active learning algorithm that leverages both uncertainty and diversity in an adversarial manner. Our model consists of an entropy minimizing feature encoding network followed by an entropy maximizing classification layer. This minimax formulation reduces the distribution gap between the labeled/unlabeled data, while a discriminator is simultaneously trained to distinguish the labeled/unlabeled data. The highest entropy samples from the classifier that the discriminator predicts as unlabeled are selected for labeling. We extensively evaluate our method on various image classification and semantic segmentation benchmark datasets and show superior performance over the state-of-the-art methods.
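The selection rule described above — querying the highest-entropy samples among those the discriminator still flags as unlabeled — can be sketched as a small scoring function. This is an illustrative sketch only, not the authors' implementation: the function name `select_for_labeling`, the array-based interface, and the discriminator threshold are all assumptions, and the trained networks that would produce these scores are out of scope.

```python
import numpy as np

def select_for_labeling(class_probs, disc_unlabeled_prob, budget, disc_threshold=0.5):
    """Hypothetical helper: rank unlabeled-pool samples for annotation.

    class_probs: (N, C) softmax outputs of the task classifier on the pool.
    disc_unlabeled_prob: (N,) discriminator probability that a sample is unlabeled.
    budget: number of samples to query this round.
    """
    # Predictive entropy of the classifier -- the uncertainty signal.
    eps = 1e-12
    entropy = -np.sum(class_probs * np.log(class_probs + eps), axis=1)

    # Keep only samples the discriminator still recognizes as unlabeled,
    # i.e. samples far from the labeled distribution (the diversity signal).
    candidates = np.flatnonzero(disc_unlabeled_prob >= disc_threshold)
    if len(candidates) < budget:
        # Fall back to the whole pool if too few samples pass the threshold.
        candidates = np.arange(len(entropy))

    # Query the highest-entropy candidates.
    order = candidates[np.argsort(-entropy[candidates])]
    return order[:budget]
```

For example, with a four-sample pool where samples 0 and 2 have near-uniform class probabilities but only samples 0, 1, and 3 are scored as unlabeled by the discriminator, a budget of 2 would select sample 0 (uncertain and unlabeled-looking) ahead of sample 2 (uncertain but already well covered by the labeled set).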






