Evaluating AI systems under uncertain ground truth: a case study in dermatology

07/05/2023
by David Stutz et al.

For safety, AI systems in health undergo thorough evaluations before deployment, validating their predictions against a ground truth that is assumed certain. However, this is often not the case: the ground truth itself may be uncertain. Unfortunately, this uncertainty is largely ignored in the standard evaluation of AI models, which can have severe consequences such as overestimating future performance. To avoid this, we measure the effects of ground truth uncertainty, which we assume decomposes into two main components: annotation uncertainty, which stems from the lack of reliable annotations, and inherent uncertainty, due to limited observational information. This ground truth uncertainty is ignored when estimating the ground truth by deterministically aggregating annotations, e.g., by majority voting or averaging. In contrast, we propose a framework in which aggregation is done using a statistical model. Specifically, we frame aggregation of annotations as posterior inference of so-called plausibilities, representing distributions over classes in a classification setting, subject to a hyper-parameter encoding annotator reliability. Based on this model, we propose a metric for measuring annotation uncertainty and provide uncertainty-adjusted metrics for performance evaluation. We present a case study applying our framework to skin condition classification from images, where annotations are provided in the form of differential diagnoses. The deterministic adjudication process used in previous work, inverse rank normalization (IRN), ignores ground truth uncertainty in evaluation. Instead, we present two alternative statistical models: a probabilistic version of IRN and a Plackett-Luce-based model. We find that a large portion of the dataset exhibits significant ground truth uncertainty, and that standard IRN-based evaluation severely overestimates performance while providing no uncertainty estimates.
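To make the contrast between deterministic and statistical aggregation concrete, the following is a minimal illustrative sketch in Python. It is not the authors' probabilistic IRN or Plackett-Luce model: the IRN-style weighting (each ranked diagnosis contributes weight inversely proportional to its rank), the Dirichlet posterior over plausibilities with a reliability-like `strength` hyper-parameter, and the function names `irn_weights`, `sample_plausibilities`, and `uncertainty_adjusted_topk` are all simplifying assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def irn_weights(annotations, num_classes):
    """Deterministic IRN-style aggregation (simplified): each annotator's ranked
    differential diagnosis contributes weight 1/rank per class; weights are
    summed over annotators and normalized into a single 'ground truth' distribution."""
    weights = np.zeros(num_classes)
    for ranking in annotations:                    # one ranked list of class ids per annotator
        for rank, cls in enumerate(ranking, start=1):
            weights[cls] += 1.0 / rank
    return weights / weights.sum()

def sample_plausibilities(annotations, num_classes, strength=1.0, num_samples=100):
    """Hypothetical statistical alternative: reuse the IRN weights as pseudo-counts
    of a Dirichlet posterior over class plausibilities and draw samples from it.
    `strength` loosely plays the role of an annotator-reliability hyper-parameter."""
    counts = irn_weights(annotations, num_classes) * strength * len(annotations)
    return rng.dirichlet(counts + 1e-6, size=num_samples)

def uncertainty_adjusted_topk(pred_scores, plausibility_samples, k=3):
    """Uncertainty-adjusted top-k metric for one example: the plausibility mass
    covered by the model's top-k predictions, averaged over posterior samples."""
    topk = np.argsort(pred_scores)[::-1][:k]
    return plausibility_samples[:, topk].sum(axis=1).mean()

# Toy usage: 4 skin conditions, three annotators give partial differential diagnoses.
annotations = [[2, 0], [2, 1, 0], [0, 2]]
samples = sample_plausibilities(annotations, num_classes=4)
model_scores = np.array([0.1, 0.2, 0.6, 0.1])
print(irn_weights(annotations, 4))                       # deterministic target distribution
print(uncertainty_adjusted_topk(model_scores, samples, k=2))
```

The key point of the sketch is the shift from a single aggregated distribution (the output of `irn_weights`) to a posterior over plausibilities (the samples), so that any evaluation metric can be averaged over that posterior and reported together with its spread rather than as a single, overconfident number.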


