
Prediction stability as a criterion in active learning

by Junyu Liu, et al.

Recent breakthroughs in deep learning rely heavily on large numbers of annotated samples. Active learning is one possible way to overcome this limitation. In contrast to previous active learning algorithms, which use only information available after training, we propose a new class of methods, named sequential-based methods, that exploit information gathered during training. A specific active learning criterion called prediction stability is proposed to demonstrate the feasibility of sequential-based methods. Experiments on CIFAR-10 and CIFAR-100 indicate that prediction stability is effective and works well on datasets with fewer labels: it matches the accuracy of traditional acquisition functions such as entropy on CIFAR-10 and notably outperforms them on CIFAR-100.

