Near-Optimal Active Learning of Multi-Output Gaussian Processes

11/21/2015
by Yehong Zhang, et al.

This paper addresses the problem of active learning of a multi-output Gaussian process (MOGP) model representing multiple types of coexisting correlated environmental phenomena. In contrast to existing works, our active learning problem involves selecting not just the most informative sampling locations to be observed but also the types of measurements at each selected location, so as to minimize the predictive uncertainty (i.e., posterior joint entropy) of a target phenomenon of interest given a sampling budget. Unfortunately, such an entropy criterion scales poorly in the number of candidate sampling locations and the number of selected observations when optimized. To resolve this issue, we first exploit a structure common to sparse MOGP models to derive a novel active learning criterion. Then, we exploit a relaxed form of the submodularity property of our new criterion to devise a polynomial-time approximation algorithm that guarantees a constant-factor approximation of the performance achieved by the optimal set of selected observations. Empirical evaluation on real-world datasets shows that our proposed approach outperforms existing algorithms for active learning of MOGP and single-output GP models.
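For intuition, the sketch below is a minimal illustration (not the paper's algorithm) of greedy entropy-based active learning with a plain single-output GP: at each step it picks the candidate location whose observation most reduces the posterior joint entropy of the predictions at a set of target locations. The kernel, noise level, and candidate/target sets are illustrative assumptions; the paper's method additionally selects the measurement type at each location and exploits a sparse MOGP structure to make the entropy criterion tractable.

```python
# Illustrative sketch only: greedy selection of sampling locations that minimize
# the posterior joint entropy of GP predictions at target locations.
# The kernel, noise level, and data below are assumptions for demonstration.
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def posterior_entropy(targets, observed, noise=0.01):
    """Posterior joint entropy (up to additive constants) of the GP at `targets`
    given noisy observations at `observed`: 0.5 * log det of the posterior covariance."""
    K_tt = rbf_kernel(targets, targets)
    if len(observed) == 0:
        cov = K_tt
    else:
        K_oo = rbf_kernel(observed, observed) + noise * np.eye(len(observed))
        K_to = rbf_kernel(targets, observed)
        cov = K_tt - K_to @ np.linalg.solve(K_oo, K_to.T)
    _, logdet = np.linalg.slogdet(cov + 1e-9 * np.eye(len(targets)))
    return 0.5 * logdet

def greedy_select(candidates, targets, budget):
    """Greedily pick `budget` candidate locations, each time choosing the one
    whose inclusion most reduces the posterior entropy at the target locations."""
    chosen, remaining = [], list(range(len(candidates)))
    for _ in range(budget):
        best_i, best_h = None, np.inf
        for i in remaining:
            h = posterior_entropy(targets, candidates[chosen + [i]])
            if h < best_h:
                best_i, best_h = i, h
        chosen.append(best_i)
        remaining.remove(best_i)
    return chosen

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    candidates = rng.uniform(0, 10, size=(50, 2))   # candidate sampling locations
    targets = rng.uniform(0, 10, size=(20, 2))      # locations of the target phenomenon
    print("Selected indices:", greedy_select(candidates, targets, budget=5))
```

This naive greedy loop re-evaluates the entropy for every remaining candidate at every step, which is exactly the kind of scaling bottleneck the paper's sparse-MOGP criterion and approximation guarantees are designed to avoid.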


Related research:

03/28/2022 - Safe Active Learning for Multi-Output Gaussian Processes
02/09/2012 - Active Bayesian Optimization: Minimizing Minimizer Entropy
09/17/2012 - Submodularity in Batch Active Learning and Survey Problems on Gaussian Random Fields
06/07/2023 - Training-Free Neural Active Learning with Initialization-Robustness Guarantees
10/24/2022 - Active Learning for Single Neuron Models with Lipschitz Non-Linearities
03/08/2019 - Active learning for enumerating local minima based on Gaussian process derivatives
12/22/2021 - Simple and near-optimal algorithms for hidden stratification and multi-group learning
