
Validating uncertainty in medical image translation

02/11/2020
by Jacob C. Reinhold, et al.

Medical images are increasingly used as input to deep neural networks to produce quantitative values that aid researchers and clinicians. However, standard deep neural networks do not provide a reliable measure of uncertainty in those quantitative values. Recent work has shown that keeping dropout active during both training and testing (Monte Carlo dropout) yields estimates of predictive uncertainty. In this work, we investigate using dropout to estimate epistemic and aleatoric uncertainty in a CT-to-MR image translation task. We show that both types of uncertainty behave as their definitions predict, which provides confidence in the output uncertainty estimates.
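
The technique the abstract refers to is Monte Carlo dropout: dropout layers are left active at test time, the variance across repeated stochastic forward passes is read as epistemic uncertainty, and a second network output predicts the aleatoric (data) noise, following the heteroscedastic decomposition of Kendall and Gal (2017). The PyTorch sketch below is a minimal illustration of that idea, not the authors' implementation; the toy architecture, dropout rate, and sample count are assumptions made for the example.

```python
import torch
import torch.nn as nn


class TranslationNet(nn.Module):
    """Toy CT-to-MR regression network with dropout (hypothetical).

    Predicts a mean image and a log-variance map; the log-variance
    head models the aleatoric (per-voxel noise) term.
    """

    def __init__(self, p=0.2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(p),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(p),
        )
        self.mean_head = nn.Conv2d(32, 1, 1)
        self.logvar_head = nn.Conv2d(32, 1, 1)

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)


@torch.no_grad()
def mc_dropout_uncertainty(model, x, n_samples=50):
    """Monte Carlo dropout: run repeated stochastic forward passes
    and decompose predictive uncertainty into epistemic and aleatoric
    components."""
    # train() keeps dropout stochastic; for real models that also use
    # BatchNorm, set only the dropout modules to train mode instead.
    model.train()
    means, variances = [], []
    for _ in range(n_samples):
        mu, logvar = model(x)
        means.append(mu)
        variances.append(logvar.exp())
    means = torch.stack(means)          # (T, B, 1, H, W)
    variances = torch.stack(variances)
    prediction = means.mean(dim=0)      # MC estimate of the translated image
    epistemic = means.var(dim=0)        # spread across stochastic passes
    aleatoric = variances.mean(dim=0)   # average predicted noise variance
    return prediction, epistemic, aleatoric


# Usage on a random stand-in for a CT slice:
model = TranslationNet()
ct = torch.randn(1, 1, 64, 64)
pred, epistemic, aleatoric = mc_dropout_uncertainty(model, ct)
```

Under this decomposition, epistemic uncertainty should shrink as training data grows while aleatoric uncertainty tracks irreducible image noise, which matches the "captured as defined" behavior the abstract describes.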


Related research

11/02/2018 · Frequentist uncertainty estimates for deep learning
We provide frequentist estimates of aleatoric and epistemic uncertainty ...

03/09/2019 · BayesOD: A Bayesian Approach for Uncertainty Estimation in Deep Object Detectors
One of the challenging aspects of incorporating deep neural networks int...

06/01/2021 · Quantifying Predictive Uncertainty in Medical Image Analysis with Deep Kernel Learning
Deep neural networks are increasingly being used for the analysis of med...

07/27/2020 · A Bayesian Hierarchical Network for Combining Heterogeneous Data Sources in Medical Diagnoses
Computer-Aided Diagnosis has shown stellar performance in providing accu...

03/01/2022 · How certain are your uncertainties?
Having a measure of uncertainty in the output of a deep learning method ...

09/22/2021 · A Quantitative Comparison of Epistemic Uncertainty Maps Applied to Multi-Class Segmentation
Uncertainty assessment has gained rapid interest in medical image analys...