Improving Regression Uncertainty Estimation Under Statistical Change

09/16/2021
by Tony Tohme, et al.

While deep neural networks are highly performant and successful in a wide range of real-world problems, estimating their predictive uncertainty remains a challenging task. To address this challenge, we propose and implement a loss function for regression uncertainty estimation based on the Bayesian Validation Metric (BVM) framework, combined with ensemble learning. A series of experiments on in-distribution data shows that the proposed method is competitive with existing state-of-the-art methods. In addition, experiments on out-of-distribution data show that the proposed method is robust to statistical change and exhibits superior predictive capability.

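The BVM-based loss itself is defined in the full text. As a rough illustration of ensemble-based regression uncertainty estimation in general, the sketch below trains several independently initialized mean-variance networks with a standard heteroscedastic Gaussian negative log-likelihood and combines them as a uniform mixture. This is a generic deep-ensemble baseline, not the paper's BVM loss; all names and hyperparameters (MeanVarianceNet, n_members, epochs, lr) are illustrative placeholders.

```python
# Illustrative deep-ensemble baseline for regression uncertainty
# (NOT the paper's BVM-based loss, which is defined in the full text).

import torch
import torch.nn as nn


class MeanVarianceNet(nn.Module):
    """Small MLP that predicts a mean and a log-variance per input."""

    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mean_head = nn.Linear(hidden, 1)
        self.logvar_head = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)


def gaussian_nll(mean, logvar, y):
    """Heteroscedastic Gaussian negative log-likelihood."""
    return 0.5 * (logvar + (y - mean) ** 2 / logvar.exp()).mean()


def train_ensemble(x, y, in_dim, n_members=5, epochs=200, lr=1e-3):
    """Train independently initialized members on the same data."""
    members = []
    for _ in range(n_members):
        net = MeanVarianceNet(in_dim)
        opt = torch.optim.Adam(net.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            mean, logvar = net(x)
            loss = gaussian_nll(mean, logvar, y)
            loss.backward()
            opt.step()
        members.append(net)
    return members


@torch.no_grad()
def predict(members, x):
    """Combine members as a uniform mixture of Gaussians."""
    means = torch.stack([m(x)[0] for m in members])            # (M, N, 1)
    variances = torch.stack([m(x)[1].exp() for m in members])  # (M, N, 1)
    mixture_mean = means.mean(dim=0)
    # Total variance = average aleatoric variance + disagreement between members.
    mixture_var = variances.mean(dim=0) + means.var(dim=0, unbiased=False)
    return mixture_mean, mixture_var


if __name__ == "__main__":
    # Tiny synthetic example: y = sin(x) with input-dependent noise.
    x = torch.linspace(-3, 3, 200).unsqueeze(1)
    y = torch.sin(x) + 0.1 * torch.abs(x) * torch.randn_like(x)
    ensemble = train_ensemble(x, y, in_dim=1)
    mu, var = predict(ensemble, x)
    print(mu.shape, var.shape)
```

At prediction time the mixture variance splits into an average aleatoric term plus a member-disagreement term, which is the usual way ensembles expose epistemic uncertainty under distribution shift.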

research · 06/12/2022
Density Regression and Uncertainty Quantification with Bayesian Deep Noise Neural Networks
Deep neural network (DNN) models have achieved state-of-the-art predicti...

research · 11/24/2022
Estimating Regression Predictive Distributions with Sample Networks
Estimating the uncertainty in deep neural network predictions is crucial...

research · 11/03/2020
The Aleatoric Uncertainty Estimation Using a Separate Formulation with Virtual Residuals
We propose a new optimization framework for aleatoric uncertainty estima...

research · 12/08/2018
Sampling-based Bayesian Inference with gradient uncertainty
Deep neural networks (NNs) have achieved impressive performance, often ex...

research · 12/17/2021
Improving evidential deep learning via multi-task learning
The Evidential regression network (ENet) estimates a continuous target a...

research · 03/14/2019
Deep Distribution Regression
Due to their flexibility and predictive performance, machine-learning ba...

research · 02/24/2023
Retrospective Uncertainties for Deep Models using Vine Copulas
Despite the major progress of deep models as learning machines, uncertai...
