Active Learning at the ImageNet Scale

by Zeyad Ali Sami Emam et al.
University of Maryland
New York University
National Institutes of Health

Active learning (AL) algorithms aim to identify an optimal subset of data for annotation, such that deep neural networks (DNNs) achieve better performance when trained on this labeled subset. AL is especially impactful in industrial-scale settings where data labeling costs are high and practitioners use every tool at their disposal to improve model performance. The recent success of self-supervised pretraining (SSP) highlights the importance of harnessing abundant unlabeled data to boost model performance. By combining AL with SSP, we can make use of unlabeled data while simultaneously labeling and training on particularly informative samples. In this work, we study the combination of AL and SSP on ImageNet. We find that performance on small toy datasets – the typical benchmark setting in the literature – is not representative of performance on ImageNet, owing to the class-imbalanced samples selected by an active learner. Among the baselines we test, popular AL algorithms fail to outperform random sampling across a variety of small- and large-scale settings. To remedy the class-imbalance problem, we propose Balanced Selection (BASE), a simple, scalable AL algorithm that consistently outperforms random sampling by selecting more class-balanced samples for annotation than existing methods. Our code is available at: .
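To illustrate the core idea of class-balanced selection, here is a minimal sketch of how an active learner might spread its annotation budget evenly across predicted classes, picking the most informative (e.g., highest-uncertainty) samples within each class. This is a hedged illustration of the general principle, not the paper's actual BASE algorithm; the function name, scoring scheme, and backfill strategy are assumptions made for the example.

```python
import numpy as np

def balanced_select(pred_classes, scores, budget, n_classes):
    """Illustrative class-balanced acquisition (NOT the paper's BASE algorithm).

    pred_classes: predicted class per unlabeled sample, shape (N,)
    scores:       informativeness score per sample (higher = more informative)
    budget:       total number of samples to select for annotation
    """
    per_class = budget // n_classes
    chosen = []
    for c in range(n_classes):
        idx = np.where(pred_classes == c)[0]
        idx = idx[np.argsort(-scores[idx])]      # most informative first
        chosen.extend(int(i) for i in idx[:per_class])
    # Backfill with top-scoring leftovers if some predicted classes are scarce.
    if len(chosen) < budget:
        taken = set(chosen)
        for i in np.argsort(-scores):
            if int(i) not in taken:
                chosen.append(int(i))
                taken.add(int(i))
                if len(chosen) == budget:
                    break
    return np.array(chosen[:budget])
```

A purely uncertainty-based acquisition function would instead take the global top-`budget` scores, which on ImageNet-scale data can concentrate heavily on a few hard classes; the per-class quota above is the simplest way to enforce balance.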

