Bayesian Active Learning with Fully Bayesian Gaussian Processes

05/23/2022
by Christoffer Riis, et al.

The bias-variance trade-off is a well-known problem in machine learning that becomes more pronounced the less data is available. In active learning, where labeled data is scarce or difficult to obtain, neglecting this trade-off can cause inefficient and suboptimal querying, leading to unnecessary data labeling. In this paper, we focus on active learning with Gaussian Processes (GPs). For the GP, the bias-variance trade-off is governed by the optimization of two hyperparameters: the length scale and the noise term. Since the optimal mode of the joint posterior over the hyperparameters corresponds to the optimal bias-variance trade-off, we approximate this joint posterior and use it to design two new acquisition functions. The first is a Bayesian variant of Query-by-Committee (B-QBC), and the second is an extension that explicitly minimizes the predictive variance through a Query by Mixture of Gaussian Processes (QB-MGP) formulation. Across six common simulators, we empirically show that B-QBC, on average, achieves the best marginal likelihood, whereas QB-MGP achieves the best predictive performance. We show that incorporating the bias-variance trade-off in the acquisition functions mitigates unnecessary and expensive data labeling.
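To make the two acquisition functions concrete, the following is a minimal numpy-only sketch of the idea, not the authors' implementation: a committee of GPs is formed by drawing hyperparameter settings (length scale, noise) from an approximation of their joint posterior; B-QBC then queries where the committee's posterior means disagree most, while QB-MGP queries where the total predictive variance of the resulting mixture of GPs is largest. The function names `b_qbc` and `qb_mgp` and the fixed list of hyperparameter samples are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale):
    # Squared-exponential (RBF) kernel between two sets of points.
    d2 = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X_train, y_train, X_test, length_scale, noise):
    # Standard GP regression posterior mean and variance for one
    # fixed hyperparameter setting.
    K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_test, length_scale)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v**2, axis=0)  # k(x,x) = 1 for the RBF kernel
    return mean, var

def b_qbc(X_train, y_train, X_pool, hyper_samples):
    # Bayesian Query-by-Committee: each committee member is a GP with
    # hyperparameters drawn from the (approximate) joint posterior.
    # Query the pool point where the members' means disagree most.
    means = np.stack([gp_predict(X_train, y_train, X_pool, ls, noise)[0]
                      for ls, noise in hyper_samples])
    disagreement = means.var(axis=0)
    return int(np.argmax(disagreement))

def qb_mgp(X_train, y_train, X_pool, hyper_samples):
    # Query by Mixture of GPs: query where the total variance of the
    # uniform mixture of the committee's predictive distributions is
    # largest (within-model variance + between-model disagreement).
    preds = [gp_predict(X_train, y_train, X_pool, ls, noise)
             for ls, noise in hyper_samples]
    means = np.stack([m for m, _ in preds])
    vars_ = np.stack([v for _, v in preds])
    mixture_var = (vars_ + means**2).mean(axis=0) - means.mean(axis=0)**2
    return int(np.argmax(mixture_var))
```

In practice the hyperparameter samples would come from an approximation of the joint posterior (e.g. MCMC over the marginal likelihood) rather than a hand-picked list, and both criteria reduce to ordinary variance-based active learning when the posterior collapses to a single mode.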


