Mitigating Sampling Bias and Improving Robustness in Active Learning

09/13/2021
by Ranganath Krishnan, et al.

This paper presents simple and efficient methods to mitigate sampling bias in active learning while achieving state-of-the-art accuracy and model robustness. We introduce supervised contrastive active learning (SCAL), which leverages the contrastive loss for active learning in a supervised setting, and propose an unbiased query strategy that selects informative data samples with diverse feature representations using our methods: SCAL and deep feature modeling (DFM). We empirically demonstrate that our proposed methods reduce sampling bias and achieve state-of-the-art accuracy and model calibration in an active learning setup, with query computation 26x faster than Bayesian active learning by disagreement and 11x faster than CoreSet. The proposed SCAL method outperforms alternatives by a large margin in robustness to dataset shift and out-of-distribution data.
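The abstract does not include code; the sketch below is a minimal PyTorch illustration of the two ingredients it names: a supervised contrastive loss over labeled embeddings, and a feature-space acquisition step that prefers unlabeled samples far from the classes already covered. The function names, the centroid-distance scoring, and the hyperparameters are illustrative assumptions, not the authors' implementation (DFM is not sketched here).

```python
import torch
import torch.nn.functional as F


def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss on L2-normalized embeddings:
    samples sharing a label are pulled together, all others pushed apart.
    features: (N, D) embeddings, labels: (N,) integer class labels.
    (Temperature value is an illustrative assumption.)"""
    features = F.normalize(features, dim=1)
    sim = features @ features.t() / temperature  # pairwise similarities
    # exclude self-similarity on the diagonal
    logits_mask = ~torch.eye(len(labels), dtype=torch.bool, device=features.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & logits_mask
    # log-softmax over all other samples in the batch
    sim = sim.masked_fill(~logits_mask, float("-inf"))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # average log-probability over positives, for anchors that have positives
    pos_counts = pos_mask.sum(1)
    valid = pos_counts > 0
    per_anchor = log_prob.masked_fill(~pos_mask, 0.0).sum(1)
    return -(per_anchor[valid] / pos_counts[valid]).mean()


def select_queries(pool_features, labeled_features, labeled_labels, budget):
    """Illustrative acquisition step (an assumption, not the paper's exact rule):
    score each unlabeled sample by its cosine distance to the nearest labeled
    class centroid in feature space and query the least-covered samples."""
    pool = F.normalize(pool_features, dim=1)
    labeled = F.normalize(labeled_features, dim=1)
    centroids = torch.stack([
        labeled[labeled_labels == c].mean(0) for c in labeled_labels.unique()
    ])
    # larger distance to the closest centroid = less well represented so far
    dist_to_nearest = 1.0 - (pool @ centroids.t()).max(dim=1).values
    return torch.topk(dist_to_nearest, k=budget).indices
```

In each active-learning round, one would train the feature extractor with the contrastive loss on the labeled pool, embed the unlabeled pool, and pass the embeddings to the selection function to pick the next batch to annotate.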


