Characterizing Sources of Uncertainty to Proxy Calibration and Disambiguate Annotator and Data Bias

09/20/2019
by Asma Ghandeharioun, et al.

Supporting model interpretability for complex phenomena where annotators can legitimately disagree, such as emotion recognition, is a challenging machine learning task. In this work, we show that explicitly quantifying the uncertainty in such settings has interpretability benefits. We use a simple modification of classical network inference, Monte Carlo dropout, to obtain measures of epistemic and aleatoric uncertainty. We identify a significant correlation between aleatoric uncertainty and human annotator disagreement (r ≈ .3). Additionally, we demonstrate how difficult and subjective training samples can be identified using aleatoric uncertainty, and how epistemic uncertainty can reveal data bias that could result in unfair predictions. We identify the total uncertainty as a suitable surrogate for model calibration, i.e., the degree to which we can trust the model's predicted confidence. In addition to explainability benefits, we observe modest performance boosts from incorporating model uncertainty.
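The mechanics behind the abstract are easy to sketch. Below is a minimal PyTorch illustration of one common way to obtain such a decomposition with Monte Carlo dropout: keep dropout layers stochastic at test time, average the softmax outputs over several passes, and split the predictive entropy into an aleatoric term (the mean per-pass entropy) and an epistemic term (the remainder, i.e., the mutual information). The helper names (`enable_mc_dropout`, `mc_dropout_uncertainty`) and this entropy-based decomposition are illustrative assumptions, not the authors' code; the paper's exact estimator may differ.

import torch
import torch.nn as nn

# Illustrative sketch (not the authors' implementation): an entropy-based
# MC-dropout uncertainty decomposition for a classification network.

def enable_mc_dropout(model: nn.Module) -> None:
    """Keep dropout layers stochastic at inference time (Monte Carlo dropout)."""
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.train()

@torch.no_grad()
def mc_dropout_uncertainty(model: nn.Module, x: torch.Tensor, n_passes: int = 50):
    """Run n_passes stochastic forward passes and decompose the uncertainty.

    total:     entropy of the averaged predictive distribution
    aleatoric: mean entropy of the individual passes (data/label noise)
    epistemic: total - aleatoric, the mutual information (model uncertainty)
    """
    model.eval()              # freeze batch norm, etc. ...
    enable_mc_dropout(model)  # ...but keep dropout sampling active

    # probs has shape (n_passes, batch, classes)
    probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_passes)])
    mean_probs = probs.mean(dim=0)

    eps = 1e-12  # numerical safety for log(0)
    total = -(mean_probs * (mean_probs + eps).log()).sum(dim=-1)
    aleatoric = -(probs * (probs + eps).log()).sum(dim=-1).mean(dim=0)
    epistemic = total - aleatoric
    return mean_probs, total, aleatoric, epistemic

Under this decomposition, the total entropy corresponds to the quantity the abstract proposes as a calibration surrogate, while the aleatoric term is what the paper correlates with human annotator disagreement on difficult, subjective samples.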

Related research

05/20/2022 · A General Framework for quantifying Aleatoric and Epistemic uncertainty in Graph Neural Networks
Graph Neural Networks (GNN) provide a powerful framework that elegantly ...

08/01/2019 · Sampling-free Epistemic Uncertainty Estimation Using Approximated Variance Propagation
We present a sampling-free approach for computing the epistemic uncertainty ...

09/02/2021 · MACEst: The reliable and trustworthy Model Agnostic Confidence Estimator
Reliable confidence estimates are hugely important for any machine learning ...

05/24/2023 · Sampling-based Uncertainty Estimation for an Instance Segmentation Network
The examination of uncertainty in the predictions of machine learning (ML) ...

09/05/2022 · Improving Out-of-Distribution Detection via Epistemic Uncertainty Adversarial Training
The quantification of uncertainty is important for the adoption of machine ...

07/02/2018 · Uncertainty in the Variational Information Bottleneck
We present a simple case study, demonstrating that Variational Information ...

10/21/2022 · Considerations for Visualizing Uncertainty in Clinical Machine Learning Models
Clinician-facing predictive models are increasingly present in the healthcare ...
