Minimax Active Learning

12/18/2020
by Sayna Ebrahimi, et al.

Active learning aims to develop label-efficient algorithms by querying the most representative samples to be labeled by a human annotator. Current active learning techniques either rely on model uncertainty to select the most uncertain samples or use clustering or reconstruction to choose the most diverse set of unlabeled examples. While uncertainty-based strategies are susceptible to outliers, relying solely on sample diversity does not capture the information available from the main task. In this work, we develop a semi-supervised, minimax entropy-based active learning algorithm that leverages both uncertainty and diversity in an adversarial manner. Our model consists of an entropy-minimizing feature encoding network followed by an entropy-maximizing classification layer. This minimax formulation reduces the distribution gap between the labeled and unlabeled data, while a discriminator is simultaneously trained to distinguish the two. The highest-entropy samples from the classifier that the discriminator predicts as unlabeled are selected for labeling. We extensively evaluate our method on various image classification and semantic segmentation benchmark datasets and show superior performance over state-of-the-art methods.
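
The sampling rule in the last step of the abstract can be summarized in a few lines. Below is a minimal sketch, assuming two already-trained modules: a task classifier whose softmax probabilities give the per-sample predictive entropy H(p) = -sum_c p_c log p_c, and a binary discriminator that outputs a logit for whether a sample comes from the unlabeled pool. The names (classifier, discriminator, unlabeled_loader, budget) are illustrative and not taken from the paper.

```python
import torch
import torch.nn.functional as F


def select_for_labeling(classifier, discriminator, unlabeled_loader, budget, device="cpu"):
    """Pick `budget` pool indices with the highest classifier entropy among
    samples the discriminator predicts as unlabeled."""
    entropies, unlabeled_probs, indices = [], [], []
    classifier.eval()
    discriminator.eval()
    with torch.no_grad():
        for idx, x in unlabeled_loader:  # loader yields (pool index, input batch)
            x = x.to(device)
            probs = F.softmax(classifier(x), dim=1)
            # Predictive entropy: H(p) = -sum_c p_c log p_c
            entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
            # Probability of being an unlabeled sample (discriminator outputs a logit)
            p_unlabeled = torch.sigmoid(discriminator(x)).view(-1)
            entropies.append(entropy.cpu())
            unlabeled_probs.append(p_unlabeled.cpu())
            indices.append(idx)
    entropies = torch.cat(entropies)
    unlabeled_probs = torch.cat(unlabeled_probs)
    indices = torch.cat(indices)

    # Keep candidates the discriminator believes are unlabeled, rank by entropy,
    # and return the top `budget` pool indices to send to the annotator.
    mask = unlabeled_probs > 0.5
    ranked = indices[mask][entropies[mask].argsort(descending=True)]
    return ranked[:budget].tolist()
```

The selected indices would then be labeled by the annotator and moved from the unlabeled pool to the training set before the next round of minimax training.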

Related research

04/10/2020 · State-Relabeling Adversarial Active Learning
Active learning is to design label-efficient algorithms by sampling the ...

04/14/2019 · Exploring Representativeness and Informativeness for Active Learning
How can we find a general way to choose the most suitable samples for tr...

12/10/2021 · Boosting Active Learning via Improving Test Performance
Central to active learning (AL) is what data should be selected for anno...

03/31/2019 · Variational Adversarial Active Learning
Active learning aims to develop label-efficient algorithms by sampling t...

07/30/2021 · When Deep Learners Change Their Mind: Learning Dynamics for Active Learning
Active learning aims to select samples to be annotated that yield the la...

12/16/2019 · Incorporating Unlabeled Data into Distributionally Robust Learning
We study a robust alternative to empirical risk minimization called dist...

12/09/2020 · Cost-Based Budget Active Learning for Deep Learning
Majorly classical Active Learning (AL) approach usually uses statistical...