Learning Agreement from Multi-source Annotations for Medical Image Segmentation

04/02/2023
by Yifeng Wang, et al.

In medical image analysis, multiple independent annotations are typically merged into a single ground truth to mitigate the bias introduced by individual annotator preference. However, arbitrating a final annotation is not always effective, because the arbitration process can itself introduce new biases, especially when the annotations vary significantly. This paper proposes a novel Uncertainty-guided Multi-source Annotation Network (UMA-Net) that learns medical image segmentation directly from multiple annotations. UMA-Net consists of a UNet with two quality-specific predictors, an Annotation Uncertainty Estimation Module (AUEM), and a Quality Assessment Module (QAM). Specifically, AUEM estimates a pixel-wise uncertainty map for each annotation and encourages the maps to reach agreement on reliable pixels/voxels. These uncertainty maps then guide the UNet to learn from the reliable pixels/voxels by weighting the segmentation loss. QAM grades the uncertainty maps into high-quality and low-quality groups based on assessment scores. The UNet is accordingly equipped with a high-quality learning head (H-head) and a low-quality learning head (L-head): the H-head learns only from high-quality uncertainty maps to avoid error accumulation and retain strong predictive ability, while the L-head leverages the low-quality uncertainty maps to help the backbone learn richer representations. At inference, only the UNet with the H-head is retained; the remaining modules can be removed for computational efficiency. We conduct extensive experiments on an unsupervised 3D segmentation task and a supervised 2D segmentation task. The results show that our proposed UMA-Net outperforms state-of-the-art approaches, demonstrating its generality and effectiveness.
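The two core ideas in the abstract, weighting the segmentation loss by per-annotation uncertainty maps and grading those maps into high-/low-quality groups, can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's actual formulation: the function names, the `1 - uncertainty` weighting, and the mean-reliability threshold used for grading are all assumptions for illustration.

```python
import numpy as np

def uncertainty_weighted_loss(pred, annotations, uncertainty_maps):
    """Hypothetical uncertainty-weighted segmentation loss.

    pred: (H, W) predicted foreground probabilities.
    annotations: list of (H, W) binary masks from independent annotators.
    uncertainty_maps: list of (H, W) per-pixel uncertainties in [0, 1]
        (lower = more reliable), as an AUEM-like module might produce.
    """
    eps = 1e-7
    losses = []
    for ann, unc in zip(annotations, uncertainty_maps):
        # Pixel-wise binary cross-entropy against this annotation.
        bce = -(ann * np.log(pred + eps) + (1 - ann) * np.log(1 - pred + eps))
        # Down-weight unreliable pixels: low-uncertainty (reliable)
        # pixels contribute more to the loss (assumed weighting scheme).
        weight = 1.0 - unc
        losses.append((weight * bce).sum() / (weight.sum() + eps))
    return float(np.mean(losses))

def grade_annotations(uncertainty_maps, threshold=0.5):
    """Hypothetical QAM-style grading: score each uncertainty map by its
    mean reliability and split indices into high-/low-quality groups."""
    scores = [1.0 - float(u.mean()) for u in uncertainty_maps]
    high = [i for i, s in enumerate(scores) if s >= threshold]
    low = [i for i, s in enumerate(scores) if s < threshold]
    return high, low
```

Under this sketch, a pixel where an annotation disagrees with a confident prediction but carries high uncertainty contributes little to the loss, mimicking how reliable pixels would dominate training; the grading step then lets the two heads consume the high- and low-quality groups separately.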


Related research

08/16/2023 · Hierarchical Uncertainty Estimation for Medical Image Segmentation Networks
Learning a medical image segmentation model is an inherently ambiguous t...

11/26/2021 · Modeling Human Preference and Stochastic Error for Medical Image Segmentation with Multiple Annotators
Manual annotation of medical images is highly subjective, leading to ine...

08/15/2023 · Confidence Contours: Uncertainty-Aware Annotation for Medical Semantic Segmentation
Medical image segmentation modeling is a high-stakes task where understa...

06/02/2023 · Transformer-based Annotation Bias-aware Medical Image Segmentation
Manual medical image segmentation is subjective and suffers from annotat...

04/03/2022 · Exemplar Learning for Medical Image Segmentation
Medical image annotation typically requires expert knowledge and hence i...

07/07/2020 · Meta Corrupted Pixels Mining for Medical Image Segmentation
Deep neural networks have achieved satisfactory performance in piles of ...

09/08/2021 · AgreementLearning: An End-to-End Framework for Learning with Multiple Annotators without Groundtruth
The annotation of domain experts is important for some medical applicati...
