Errors-in-Variables for deep learning: rethinking aleatoric uncertainty

05/19/2021
by Jörg Martin, et al.

We present a Bayesian treatment for deep regression using an Errors-in-Variables model that accounts for the uncertainty associated with the input to the employed neural network. We show how this treatment can be combined with existing approaches for uncertainty quantification that are based on variational inference. Our approach yields a decomposition of the predictive uncertainty into an aleatoric and an epistemic part that is more complete and, in many cases, more consistent from a statistical perspective. We illustrate and discuss the approach on various toy and real-world examples.
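
As a rough, hypothetical illustration of the decomposition mentioned in the abstract (not the authors' implementation), the sketch below treats the observed input as a noisy measurement of a latent "true" input and uses MC dropout as a stand-in variational approximation over the network weights; the network `net` and the noise levels `sigma_x` and `sigma_y` are made-up placeholders.

```python
# Hypothetical Errors-in-Variables style uncertainty decomposition:
# x is modeled as a noisy observation of a latent input zeta, and MC
# dropout substitutes for the variational treatment of the weights.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(), nn.Dropout(p=0.1), nn.Linear(64, 1)
)
sigma_x = 0.10  # assumed std. dev. of the input noise: x = zeta + eps_x
sigma_y = 0.05  # assumed std. dev. of the output noise: y = f(zeta) + eps_y


def predict_with_uncertainty(x, n_weight_samples=32, n_input_samples=32):
    """Monte-Carlo estimate of the predictive mean together with a rough
    split into an epistemic part (spread across stochastic forward passes)
    and an aleatoric part (spread induced by the input and output noise)."""
    net.train()  # keep dropout active so every forward pass is stochastic
    pass_means, pass_vars = [], []
    for _ in range(n_weight_samples):
        # draw candidate latent inputs zeta ~ N(x, sigma_x^2)
        zeta = x + sigma_x * torch.randn(n_input_samples, *x.shape)
        preds = net(zeta)
        pass_means.append(preds.mean(dim=0))
        pass_vars.append(preds.var(dim=0) + sigma_y ** 2)
    pass_means = torch.stack(pass_means)
    epistemic = pass_means.var(dim=0)           # variation across passes
    aleatoric = torch.stack(pass_vars).mean(0)  # noise-induced variation
    return pass_means.mean(dim=0), epistemic, aleatoric


mean, epistemic, aleatoric = predict_with_uncertainty(torch.tensor([[0.3]]))
```

Here the epistemic term captures the spread across stochastic forward passes, while the aleatoric term collects the contributions of the assumed input and output noise; the paper's actual treatment infers these quantities via variational inference rather than fixing them by hand.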

Related research

02/05/2023  Direct Uncertainty Quantification
Traditional neural networks are simple to train but they produce overcon...

02/09/2023  A Benchmark on Uncertainty Quantification for Deep Learning Prognostics
Reliable uncertainty quantification on RUL prediction is crucial for inf...

05/09/2023  Fully Bayesian VIB-DeepSSM
Statistical shape modeling (SSM) enables population-based quantitative a...

02/13/2023  Fixing Overconfidence in Dynamic Neural Networks
Dynamic neural networks are a recent technique that promises a remedy fo...

07/19/2021  Epistemic Neural Networks
We introduce the epistemic neural network (ENN) as an interface for unce...

02/25/2020  Variational Inference and Bayesian CNNs for Uncertainty Estimation in Multi-Factorial Bone Age Prediction
Additionally to the extensive use in clinical medicine, biological age (...

07/04/2022  Look beyond labels: Incorporating functional summary information in Bayesian neural networks
Bayesian deep learning offers a principled approach to train neural netw...
