
Stopping Criterion for Active Learning Based on Error Stability

by Hideaki Ishibashi, et al.

Active learning is a supervised-learning framework that improves predictive performance by adaptively annotating a small number of samples. Efficient active learning requires both an acquisition function, which determines the next datum to annotate, and a stopping criterion, which determines when to stop learning. In this study, we propose a stopping criterion based on error stability, which guarantees that the change in generalization error upon adding a new sample is bounded by the annotation cost; it can be applied to any Bayesian active learning method. We demonstrate that the proposed criterion stops active learning at the appropriate time for various learning models and real datasets.
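The idea behind an error-stability stop can be sketched in a few lines: query labels one at a time and halt once the change the new label induces in the model falls below a cost-derived threshold for several consecutive acquisitions. The sketch below is illustrative only, assuming a toy noisy-constant target and a running posterior-mean predictor as a stand-in for the paper's Bayesian generalization-error bound; the names `threshold` and `patience` are assumptions, not the paper's notation.

```python
import random

def active_learning_with_stability_stop(oracle, pool, threshold, patience=3):
    """Illustrative sketch of an error-stability stopping rule.

    Queries the oracle one pool point at a time and stops once the
    change in the model's prediction (a proxy for the change in
    generalization error bounded in the paper) stays below
    `threshold` for `patience` consecutive acquisitions.
    """
    labels = []
    prediction = None          # running posterior-mean estimate
    stable_steps = 0
    for x in pool:             # acquisition: pool order, for simplicity
        labels.append(oracle(x))
        new_prediction = sum(labels) / len(labels)
        if prediction is not None and abs(new_prediction - prediction) < threshold:
            stable_steps += 1
            if stable_steps >= patience:
                prediction = new_prediction
                break          # estimate has stabilized: stop querying
        else:
            stable_steps = 0
        prediction = new_prediction
    return len(labels), prediction

random.seed(0)
oracle = lambda x: 1.0 + random.gauss(0.0, 0.1)  # noisy constant target
pool = list(range(200))
n_queried, estimate = active_learning_with_stability_stop(oracle, pool, 0.005)
print(n_queried)  # stops well before exhausting the 200-point pool
```

Because the per-sample change in the running mean shrinks roughly as 1/n, the loop stops long before the pool is exhausted; tightening `threshold` (i.e., lowering the annotation-cost budget per step in the paper's terms) trades more queries for a more stable estimate.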
