Active Learning at the ImageNet Scale

11/25/2021
by Zeyad Ali Sami Emam, et al.

Active learning (AL) algorithms aim to identify an optimal subset of data for annotation, such that deep neural networks (DNN) can achieve better performance when trained on this labeled subset. AL is especially impactful in industrial-scale settings where data labeling costs are high and practitioners use every tool at their disposal to improve model performance. The recent success of self-supervised pretraining (SSP) highlights the importance of harnessing abundant unlabeled data to boost model performance. By combining AL with SSP, we can make use of unlabeled data while simultaneously labeling and training on particularly informative samples. In this work, we study a combination of AL and SSP on ImageNet. We find that performance on small toy datasets – the typical benchmark setting in the literature – is not representative of performance on ImageNet, due to the class-imbalanced samples selected by an active learner. Among the baselines we test, popular AL algorithms fail to outperform random sampling across a variety of small- and large-scale settings. To remedy the class-imbalance problem, we propose Balanced Selection (BASE), a simple, scalable AL algorithm that consistently outperforms random sampling by selecting more class-balanced samples for annotation than existing methods. Our code is available at: https://github.com/zeyademam/active_learning .
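The core idea of balanced selection can be illustrated with a short sketch. The snippet below is not the authors' BASE implementation; it is a hypothetical illustration of the general technique the abstract describes: spreading the annotation budget evenly across the model's predicted (pseudo-label) classes, and preferring uncertain samples within each class. The function name, the top-2 margin uncertainty score, and the remainder-filling rule are all assumptions for illustration.

```python
import numpy as np

def balanced_selection(probs, budget, num_classes):
    """Pick `budget` unlabeled samples for annotation, spreading picks
    evenly across predicted (pseudo-label) classes and preferring
    uncertain samples within each class.

    probs: (N, num_classes) softmax outputs of the current model on the
           unlabeled pool.
    Returns an array of indices into the unlabeled pool.
    """
    # Pseudo-label each unlabeled sample with the model's argmax class.
    pseudo = probs.argmax(axis=1)
    # Uncertainty score: negative margin between top-2 class probabilities
    # (smaller margin = more uncertain = higher score).
    sorted_probs = np.sort(probs, axis=1)
    uncertainty = -(sorted_probs[:, -1] - sorted_probs[:, -2])

    per_class = budget // num_classes
    chosen = []
    for c in range(num_classes):
        idx = np.where(pseudo == c)[0]
        # Most uncertain samples first within this pseudo-class.
        idx = idx[np.argsort(-uncertainty[idx])]
        chosen.extend(idx[:per_class].tolist())

    # Fill any leftover budget with the globally most uncertain samples.
    picked = set(chosen)
    remaining = [i for i in np.argsort(-uncertainty) if i not in picked]
    chosen.extend(remaining[: budget - len(chosen)])
    return np.array(chosen[:budget])
```

A class-agnostic acquisition function (e.g. pure margin sampling) would concentrate picks on whichever classes the model currently finds hardest; the per-class quota above is what keeps the selected batch balanced, which the paper identifies as the key to outperforming random sampling at ImageNet scale.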

