Mixed-supervised segmentation: Confidence maximization helps knowledge distillation

09/21/2021
by Bingyuan Liu, et al.

Despite achieving promising results in a breadth of medical image segmentation tasks, deep neural networks require large training datasets with pixel-wise annotations. Obtaining these curated datasets is a cumbersome process, which limits their applicability in scenarios where annotated images are scarce. Mixed supervision is an appealing alternative for mitigating this obstacle: only a small fraction of the data carries complete pixel-wise annotations, while the remaining images have a weaker form of supervision. In this work, we propose a dual-branch architecture in which the upper branch (teacher) receives strong annotations, while the bottom branch (student) is driven by limited supervision and guided by the upper branch. Combined with a standard cross-entropy loss over the labeled pixels, our novel formulation integrates two important terms: (i) a Shannon entropy loss defined over the less-supervised images, which encourages confident student predictions in the bottom branch; and (ii) a Kullback-Leibler (KL) divergence term, which transfers the knowledge of the strongly supervised branch to the less-supervised branch and guides the entropy (student-confidence) term away from trivial solutions. We show that the synergy between the entropy and KL divergence terms yields substantial improvements in performance. We also discuss an interesting link between Shannon-entropy minimization and standard pseudo-mask generation, and argue that the former should be preferred over the latter for leveraging information from unlabeled pixels. Quantitative and qualitative results on two publicly available datasets demonstrate that our method significantly outperforms other strategies for semantic segmentation within a mixed-supervision framework, as well as recent semi-supervised approaches. Moreover, we show that the branch trained with reduced supervision and guided by the top branch largely outperforms the latter.
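The three loss terms described above can be sketched numerically. The snippet below is a minimal NumPy illustration (not the authors' implementation, which would run inside a deep learning framework): predictions are flattened to a (pixels, classes) array, the teacher branch receives a cross-entropy loss on labeled pixels, the student branch receives a Shannon-entropy term, and a KL term distills teacher probabilities into the student. The weights `lam_ent` and `lam_kl`, and all function names, are illustrative assumptions.

```python
import numpy as np

EPS = 1e-12  # numerical floor inside logarithms


def softmax(logits, axis=-1):
    """Stable softmax over the class axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)


def cross_entropy(p, labels):
    """Mean pixel-wise cross-entropy over labeled pixels.

    p: (N, C) class probabilities; labels: (N,) integer ground truth.
    """
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + EPS))


def shannon_entropy(p):
    """Mean Shannon entropy; minimizing it pushes the student toward
    confident predictions. Hard pseudo-masks (argmax labels) can be seen
    as the extreme, one-hot case of this term."""
    return -np.mean(np.sum(p * np.log(p + EPS), axis=-1))


def kl_divergence(p_teacher, p_student):
    """Mean KL(teacher || student): transfers teacher knowledge and keeps
    the confidence term from collapsing to a trivial solution."""
    return np.mean(np.sum(
        p_teacher * (np.log(p_teacher + EPS) - np.log(p_student + EPS)),
        axis=-1))


def mixed_supervision_loss(teacher_logits, student_logits, labels,
                           lam_ent=1.0, lam_kl=1.0):
    """Combined objective: CE on labeled pixels + entropy + KL.
    lam_ent and lam_kl are hypothetical balancing weights."""
    p_t = softmax(teacher_logits)
    p_s = softmax(student_logits)
    return (cross_entropy(p_t, labels)
            + lam_ent * shannon_entropy(p_s)
            + lam_kl * kl_divergence(p_t, p_s))
```

Since each term is non-negative, the combined loss is bounded below by zero; in practice the entropy and KL terms would be evaluated only on the weakly supervised images, and the balancing weights tuned per dataset.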


