Training-Free Neural Active Learning with Initialization-Robustness Guarantees

06/07/2023
by   Apivich Hemachandra, et al.

Existing neural active learning algorithms aim to optimize the predictive performance of neural networks (NNs) by selecting data for labelling. However, beyond good predictive performance, robustness against random parameter initializations is also a crucial requirement in safety-critical applications. To this end, we introduce the expected variance with Gaussian processes (EV-GP) criterion for neural active learning, which is theoretically guaranteed to select data points that lead to trained NNs with both (a) good predictive performance and (b) initialization robustness. Importantly, our EV-GP criterion is training-free, i.e., it requires no training of the NN during data selection, which makes it computationally efficient. We empirically demonstrate that our EV-GP criterion is highly correlated with both initialization robustness and generalization performance, and show that it consistently outperforms baseline methods on both desiderata, especially when initial data are limited or batch sizes are large.
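To make the idea of a training-free, variance-based selection criterion concrete, here is a minimal sketch of greedy batch selection that, at each step, adds the pool point whose inclusion most reduces the average GP posterior variance over the pool. Note the assumptions: the paper's EV-GP criterion is defined with the NN's NTK/NNGP kernel, whereas this sketch substitutes a simple RBF kernel as a stand-in; function names (`ev_gp_select`, `posterior_variance`) and all parameters are illustrative, not from the paper.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    # Stand-in kernel; the actual criterion would use the NN's NTK/NNGP kernel.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * lengthscale ** 2))

def posterior_variance(K_pool, K_pool_sel, K_sel, noise=1e-3):
    # GP posterior variance at every pool point, conditioned on the selected set.
    solve = np.linalg.solve(K_sel + noise * np.eye(K_sel.shape[0]), K_pool_sel.T)
    return np.diag(K_pool) - np.einsum('ij,ji->i', K_pool_sel, solve)

def ev_gp_select(X_pool, batch_size, kernel=rbf_kernel):
    # Greedy, training-free selection: no NN training occurs during selection.
    K = kernel(X_pool, X_pool)
    selected = []
    for _ in range(batch_size):
        best_i, best_score = None, np.inf
        for i in range(len(X_pool)):
            if i in selected:
                continue
            idx = selected + [i]
            # Average posterior variance over the pool if point i were labelled.
            score = posterior_variance(K, K[:, idx], K[np.ix_(idx, idx)]).mean()
            if score < best_score:
                best_i, best_score = i, score
        selected.append(best_i)
    return selected
```

Because the GP posterior variance does not depend on the labels, the whole batch can be chosen before any annotation or NN training happens, which is what makes this style of criterion computationally attractive.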


Related research

04/05/2021 · Stopping Criterion for Active Learning Based on Error Stability
Active learning is a framework for supervised learning to improve the pr...

05/23/2022 · Bayesian Active Learning with Fully Bayesian Gaussian Processes
The bias-variance trade-off is a well-known problem in machine learning ...

11/21/2015 · Near-Optimal Active Learning of Multi-Output Gaussian Processes
This paper addresses the problem of active learning of a multi-output Ga...

11/18/2022 · Active Learning with Convolutional Gaussian Neural Processes for Environmental Sensor Placement
Deploying environmental measurement stations can be a costly and time-co...

02/27/2019 · Deeper Connections between Neural Networks and Gaussian Processes Speed-up Active Learning
Active learning methods for neural networks are usually based on greedy ...

04/29/2021 · Selecting the Points for Training using Graph Centrality
We describe a method to select the nodes in Graph datasets for training ...

09/17/2012 · Submodularity in Batch Active Learning and Survey Problems on Gaussian Random Fields
Many real-world datasets can be represented in the form of a graph whose...
