ASPEST: Bridging the Gap Between Active Learning and Selective Prediction

04/07/2023
by Jiefeng Chen, et al.

Selective prediction aims to learn a reliable model that abstains from making predictions when model uncertainty is high; such predictions can then be deferred to a human expert for further evaluation. In many real-world scenarios, however, the distribution of the test data differs from that of the training data. This leads to more inaccurate predictions and thus more human labeling, which is difficult and expensive in many settings. Active learning circumvents this difficulty by querying only the most informative examples and, in several cases, has been shown to lower the overall labeling effort. In this work, we bridge the gap between selective prediction and active learning, proposing a new learning paradigm called active selective prediction, which learns to query the more informative samples from the shifted target domain while increasing accuracy and coverage. For this new problem, we propose a simple but effective solution, ASPEST, which trains ensembles of model snapshots using self-training, with their aggregated outputs serving as pseudo labels. Extensive experiments on several image, text, and structured datasets with domain shifts demonstrate that active selective prediction can significantly outperform prior work on selective prediction and active learning (e.g., on the MNIST→SVHN benchmark with a labeling budget of 100, ASPEST improves the AUC metric over the 79.36% baseline) and achieves better utilization of humans in the loop.
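The two mechanisms the abstract describes, abstaining when ensemble confidence is low and self-training on confident target predictions, can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's implementation: each "snapshot" is just a linear classifier's weight matrix, the confidence thresholds are illustrative, and all function names are assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def ensemble_probs(snapshots, X):
    """Aggregate the ensemble: average softmax outputs over all snapshots."""
    return np.mean([softmax(X @ W) for W in snapshots], axis=0)

def selective_predict(snapshots, X, threshold=0.8):
    """Predict only when ensemble confidence clears the threshold;
    abstain (label -1) otherwise, deferring those inputs to a human."""
    probs = ensemble_probs(snapshots, X)
    conf = probs.max(axis=1)
    preds = probs.argmax(axis=1)
    preds[conf < threshold] = -1  # abstain on uncertain inputs
    return preds, conf

def pseudo_labels(snapshots, X_target, threshold=0.9):
    """Self-training step: keep only confident ensemble predictions on the
    unlabeled (shifted) target data and use them as pseudo labels."""
    probs = ensemble_probs(snapshots, X_target)
    conf = probs.max(axis=1)
    keep = conf >= threshold
    return X_target[keep], probs.argmax(axis=1)[keep]

# Toy usage: two snapshots, one confident input and one ambiguous one.
W1 = np.array([[2.0, 0.0], [0.0, 2.0]])
W2 = np.array([[2.0, 0.0], [0.0, 2.0]])
X = np.array([[3.0, 0.0], [0.1, 0.1]])
preds, conf = selective_predict([W1, W2], X)
Xp, yp = pseudo_labels([W1, W2], X)
```

In this toy run the first input yields a high-confidence class-0 prediction while the second sits at 50/50 confidence, so it is abstained on and excluded from the pseudo-labeled set; in ASPEST the retained pseudo labels would then be used to continue training the ensemble.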


