Label fusion and training methods for reliable representation of inter-rater uncertainty

02/15/2022
by   Andréanne Lemay, et al.

Medical tasks are prone to inter-rater variability due to multiple factors such as image quality, professional experience and training, or guideline clarity. Training deep learning networks with annotations from multiple raters is a common practice that mitigates the model's bias towards a single expert. Reliable models that generate calibrated outputs and reflect the inter-rater disagreement are key to the integration of artificial intelligence in clinical practice. Various methods exist to take different expert labels into account. We focus on comparing three label fusion methods: STAPLE, averaging of the raters' segmentations, and randomly sampling one rater's segmentation at each training iteration. Each label fusion method is studied using either the conventional training framework or the recently published SoftSeg framework, which limits information loss by treating the segmentation task as a regression. Our results, across 10 data splittings on two public datasets, indicate that SoftSeg models, regardless of the ground truth fusion method, had better calibration and preservation of the inter-rater variability compared with their conventional counterparts, without impacting the segmentation performance. Conventional models, i.e., trained with a Dice loss, binary inputs, and a sigmoid/softmax final activation, were overconfident and underestimated the uncertainty associated with inter-rater variability. Conversely, fusing labels by averaging with the SoftSeg framework led to underconfident outputs and an overestimation of the rater disagreement. In terms of segmentation performance, the best label fusion method differed between the two datasets studied, indicating that this parameter may be task-dependent. However, SoftSeg achieved segmentation performance systematically superior or equal to that of the conventionally trained models, with the best calibration and preservation of the inter-rater variability.
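Two of the three fusion strategies compared above can be sketched in a few lines: averaging the raters' binary masks yields a soft (non-binary) ground truth, while random sampling draws one rater's mask per training iteration. This is a minimal NumPy illustration, not the authors' implementation; the function names are ours, and STAPLE is omitted since it requires an iterative EM estimation (dedicated implementations exist, e.g., in SimpleITK).

```python
import numpy as np

rng = np.random.default_rng(0)

def average_fusion(rater_masks):
    """Soft ground truth: voxel-wise mean of the binary rater masks.
    Values land in [0, 1] and encode the inter-rater disagreement."""
    return np.mean(rater_masks, axis=0)

def random_sampling_fusion(rater_masks, rng=rng):
    """Hard ground truth: pick one rater's mask at random,
    e.g., once per training iteration."""
    idx = rng.integers(len(rater_masks))
    return rater_masks[idx]

# Toy example: two raters annotating the same 2x3 image.
masks = np.stack([
    np.array([[1, 1, 0], [0, 0, 0]]),
    np.array([[1, 0, 0], [1, 0, 0]]),
])
soft = average_fusion(masks)            # 0.5 where raters disagree
sampled = random_sampling_fusion(masks)  # exactly one rater's mask
```

The averaged target is the one that pairs naturally with SoftSeg's regression formulation, since thresholding it back to a binary mask (as conventional training does) discards the disagreement information.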


