Intrinsic uncertainties and where to find them

07/06/2021
by Francesco Farina, et al.

We introduce a framework for uncertainty estimation that both describes and extends many existing methods. We consider typical hyperparameters involved in classical training as random variables and marginalise them out to capture various sources of uncertainty in the parameter space. We investigate which forms and combinations of marginalisation are most useful from a practical point of view on standard benchmark data sets. Moreover, we discuss how some marginalisations may produce reliable estimates of uncertainty without extensive hyperparameter tuning or large-scale ensembling.
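To make the idea concrete, here is a minimal sketch (our own illustration under simplifying assumptions, not the authors' implementation) of marginalising over training hyperparameters by Monte Carlo: draw K hyperparameter configurations lambda_k from a prior, train one model per draw, and average the predictive distributions, so that p(y|x) is approximated by (1/K) * sum_k p(y|x, theta(lambda_k)). The prior ranges, the model class, and the toy data set below are arbitrary choices for demonstration only.

    # Minimal sketch (not the paper's exact algorithm): approximate the
    # marginal predictive p(y|x) = integral p(y|x, theta(lambda)) p(lambda) d-lambda
    # by Monte Carlo, treating training hyperparameters lambda (learning rate,
    # weight decay, initialisation seed) as random variables.
    import numpy as np
    from sklearn.datasets import make_moons
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X, y = make_moons(n_samples=500, noise=0.2, random_state=0)

    K = 10  # number of Monte Carlo draws over hyperparameters
    probs = []
    for k in range(K):
        # Draw one hyperparameter configuration lambda_k from a simple prior.
        lr = 10 ** rng.uniform(-3.5, -1.5)      # log-uniform learning rate
        alpha = 10 ** rng.uniform(-6, -2)       # log-uniform L2 penalty
        seed = int(rng.integers(0, 2**31 - 1))  # random initialisation seed

        model = MLPClassifier(hidden_layer_sizes=(32,),
                              learning_rate_init=lr,
                              alpha=alpha,
                              max_iter=500,
                              random_state=seed)
        model.fit(X, y)
        probs.append(model.predict_proba(X))

    # Marginalised predictive: average over the hyperparameter draws.
    p_mean = np.mean(probs, axis=0)
    # One simple uncertainty signal: entropy of the averaged predictive.
    entropy = -np.sum(p_mean * np.log(p_mean + 1e-12), axis=1)
    print("mean predictive entropy:", entropy.mean())

Each draw contributes a full model, so this reduces to ensembling with randomised hyperparameters; the averaged predictive and its entropy give one simple uncertainty estimate in the spirit of the marginalisations the abstract describes.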

Related research

09/22/2022 · Scalable Gaussian Process Hyperparameter Optimization via Coverage Regularization
Gaussian processes (GPs) are Bayesian non-parametric models popular in a...

10/19/2020 · How much progress have we made in neural network training? A New Evaluation Protocol for Benchmarking Optimizers
Many optimizers have been proposed for training deep neural networks, an...

02/26/2018 · Tunability: Importance of Hyperparameters of Machine Learning Algorithms
Modern machine learning algorithms for classification or regression such...

09/08/2015 · A Variational Bayesian State-Space Approach to Online Passive-Aggressive Regression
Online Passive-Aggressive (PA) learning is a class of online margin-base...

07/08/2021 · Likelihood-Free Frequentist Inference: Bridging Classical Statistics and Machine Learning in Simulation and Uncertainty Quantification
Many areas of science make extensive use of computer simulators that imp...

05/26/2022 · Towards Learning Universal Hyperparameter Optimizers with Transformers
Meta-learning hyperparameter optimization (HPO) algorithms from prior ex...
