Understanding Uncertainty Sampling

07/06/2023
by Shang Liu, et al.

Uncertainty sampling is a prevalent active learning algorithm that sequentially queries annotations for the data samples that the current prediction model is most uncertain about. However, the usage of uncertainty sampling has been largely heuristic: (i) there is no consensus on the proper definition of "uncertainty" for a specific task under a specific loss; (ii) there is no theoretical guarantee that prescribes a standard protocol for implementing the algorithm, for example, how to handle the sequentially arriving annotated data under the framework of optimization algorithms such as stochastic gradient descent. In this work, we systematically examine uncertainty sampling algorithms under both stream-based and pool-based active learning. We propose a notion of equivalent loss, which depends on the uncertainty measure in use and the original loss function, and establish that an uncertainty sampling algorithm essentially optimizes against such an equivalent loss. This perspective verifies the properness of existing uncertainty measures from two aspects: surrogate property and loss convexity. Furthermore, we propose a new principle for designing uncertainty measures, called loss as uncertainty: the idea is to use the conditional expected loss given the features as the uncertainty measure. Such an uncertainty measure has nice analytical properties and sufficient generality to cover both classification and regression problems, which enables us to provide the first generalization bound for uncertainty sampling algorithms under both stream-based and pool-based settings, in full generality of the underlying model and problem. Lastly, we establish connections between certain variants of uncertainty sampling algorithms and risk-sensitive objectives and distributional robustness, which can partly explain the advantage of uncertainty sampling algorithms when the sample size is small.
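The pool-based procedure described above can be sketched in a few lines. The following is a minimal illustration, not the paper's exact protocol: it assumes a binary logistic model, uses the conditional expected log loss given the features (which, under the model's own predictive distribution, equals the predictive entropy) as the "loss as uncertainty" measure, and applies one SGD step per newly annotated sample. All function and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def expected_log_loss(p):
    # "Loss as uncertainty": the conditional expected log loss given the
    # features, taken under the model's predictive distribution P(y=1|x)=p,
    # is the binary entropy -p log p - (1-p) log(1-p).
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))

def uncertainty_sampling(X, y, n_queries=50, lr=0.5):
    """Pool-based uncertainty sampling with per-query SGD on logistic loss."""
    n, d = X.shape
    w = np.zeros(d)
    labeled = np.zeros(n, dtype=bool)
    for _ in range(n_queries):
        p = sigmoid(X @ w)
        u = expected_log_loss(p)
        u[labeled] = -np.inf            # never re-query an annotated sample
        i = int(np.argmax(u))           # query the most uncertain sample
        labeled[i] = True
        # One SGD step on the newly annotated sample (logistic loss gradient)
        grad = (sigmoid(X[i] @ w) - y[i]) * X[i]
        w -= lr * grad
    return w, labeled

# Toy linearly separable pool
X = rng.normal(size=(500, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
w, labeled = uncertainty_sampling(X, y)
acc = np.mean((sigmoid(X @ w) > 0.5) == y)
```

Because entropy peaks where p is near 0.5, the loop concentrates its annotation budget near the current decision boundary, which is the behavior the equivalent-loss analysis in the paper aims to characterize.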

