Improving greedy core-set configurations for active learning with uncertainty-scaled distances

02/09/2022
by   Yuchen Li, et al.

We scale the perceived distances of the core-set algorithm by a factor of uncertainty and search for low-confidence configurations, finding significant improvements in sample efficiency across CIFAR10/100 and SVHN image classification, especially at larger acquisition sizes. We show the necessity of our modifications and explain how the improvement is due to a probabilistic quadratic speed-up in the convergence of the core-set loss, under assumptions about the relationship between model uncertainty and misclassification.
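As a rough illustration of the idea described in the abstract, the sketch below implements a greedy k-center (core-set) acquisition loop in which each candidate's distance to its nearest labeled center is multiplied by that candidate's uncertainty score, so low-confidence points appear farther away and are preferred. This is only one plausible reading of "uncertainty-scaled distances": the function name uncertainty_scaled_coreset, the use of Euclidean distance between embeddings, and the exact multiplicative scaling are assumptions, not the authors' implementation.

```python
import numpy as np

def uncertainty_scaled_coreset(features, uncertainty, labeled_idx, budget):
    """Greedy k-center (core-set) selection with uncertainty-scaled distances.

    features:    (N, d) array of model embeddings for the whole pool
    uncertainty: (N,) per-sample uncertainty scores (e.g. 1 - max softmax prob)
    labeled_idx: indices of already-labeled points (the initial centers)
    budget:      number of new points to acquire

    NOTE: illustrative sketch only; the scaling rule and distance metric
    are assumptions based on the abstract, not the paper's exact method.
    """
    n = features.shape[0]
    selected = list(labeled_idx)

    def scaled_dist_to(center):
        # Euclidean distance to one center, scaled by each candidate's
        # uncertainty so low-confidence points look "farther away".
        d = np.linalg.norm(features - features[center], axis=1)
        return d * uncertainty

    # Distance from every point to its nearest already-selected center.
    min_dist = np.full(n, np.inf)
    for c in selected:
        min_dist = np.minimum(min_dist, scaled_dist_to(c))

    acquired = []
    for _ in range(budget):
        # Greedy k-center step: pick the point whose scaled distance to its
        # nearest center is largest, then update the distance map.
        new_center = int(np.argmax(min_dist))
        acquired.append(new_center)
        selected.append(new_center)
        min_dist = np.minimum(min_dist, scaled_dist_to(new_center))
        min_dist[new_center] = 0.0

    return acquired
```

In a typical active-learning round one would recompute features and uncertainty from the current model, call this selection routine with the round's acquisition budget, label the returned indices, and retrain.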


