Active Learning for Approximation of Expensive Functions with Normal Distributed Output Uncertainty

08/18/2016
by   Joachim van der Herten, et al.

When approximating a black-box function, active learning that focuses sampling on regions with non-linear responses tends to improve accuracy. We extend the FLOLA-Voronoi method, introduced previously for deterministic responses, and theoretically derive the impact of output uncertainty. The algorithm automatically puts more emphasis on exploration to provide more information to the models.
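The (F)LOLA-Voronoi family of methods combines an exploration term (the volume of each sample's Voronoi cell, estimable by Monte Carlo) with an exploitation term (local non-linearity of the response). The sketch below illustrates that general idea only; it is not the authors' FLOLA-Voronoi algorithm, and the residual-based non-linearity measure, the additive score, and the perturbation step are simplifying assumptions for demonstration.

```python
import numpy as np

def voronoi_volumes(samples, n_mc=2000, rng=None):
    """Monte Carlo estimate of each sample's Voronoi cell volume in [0, 1]^d.

    Larger cells indicate under-explored regions (exploration term)."""
    if rng is None:
        rng = np.random.default_rng(0)
    points = rng.random((n_mc, samples.shape[1]))
    # Assign each random point to its nearest sample.
    dists = np.linalg.norm(points[:, None, :] - samples[None, :, :], axis=2)
    owner = dists.argmin(axis=1)
    return np.bincount(owner, minlength=len(samples)) / n_mc

def nonlinearity(samples, values, k=4):
    """Exploitation term: deviation of the responses around each sample from
    a local linear fit through its k nearest neighbours."""
    n = samples.shape[0]
    scores = np.zeros(n)
    for i in range(n):
        dists = np.linalg.norm(samples - samples[i], axis=1)
        nbrs = np.argsort(dists)[1:k + 1]
        # Fit a local linear model y ~ a + g . (x - x_i) on the neighbourhood.
        A = np.hstack([np.ones((k, 1)), samples[nbrs] - samples[i]])
        coef, *_ = np.linalg.lstsq(A, values[nbrs], rcond=None)
        residual = values[nbrs] - A @ coef
        scores[i] = np.abs(residual).mean()
    return scores

def next_sample(samples, values, rng=None):
    """Pick a new point near the sample with the highest hybrid score."""
    if rng is None:
        rng = np.random.default_rng(1)
    score = voronoi_volumes(samples, rng=rng) + nonlinearity(samples, values)
    best = samples[score.argmax()]
    # Perturb within the winning region so we do not duplicate the sample.
    return np.clip(best + rng.normal(scale=0.05, size=best.shape), 0.0, 1.0)

# Toy usage: approximate f(x) = sin(10 x0) on [0, 1]^2.
rng = np.random.default_rng(42)
X = rng.random((12, 2))
y = np.sin(10 * X[:, 0])
x_new = next_sample(X, y, rng)
print(x_new.shape)  # prints (2,)
```

Because the response varies only along the first coordinate here, the non-linearity term pulls samples toward the oscillations of sin(10 x0), while the Voronoi-volume term keeps filling the largest empty regions.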

Related research

06/22/2020  Output-Weighted Importance Sampling for Bayesian Experimental Design and Uncertainty Quantification
    We introduce a class of acquisition functions for sample selection that ...

07/31/2017  Interpretable Active Learning
    Active learning has long been a topic of study in machine learning. Howe...

09/11/2019  On weighted uncertainty sampling in active learning
    This note explores probabilistic sampling weighted by uncertainty in act...

10/27/2022  Poisson Reweighted Laplacian Uncertainty Sampling for Graph-based Active Learning
    We show that uncertainty sampling is sufficient to achieve exploration v...

04/05/2023  Batch mode active learning for efficient parameter estimation
    For many tasks of data analysis, we may only have the information of the...

02/15/2020  Let Me At Least Learn What You Really Like: Dealing With Noisy Humans When Learning Preferences
    Learning the preferences of a human improves the quality of the interact...

01/31/2019  Bayesian active learning for optimization and uncertainty quantification in protein docking
    Motivation: Ab initio protein docking represents a major challenge for o...
