A Robust Deep Attention Network to Noisy Labels in Semi-supervised Biomedical Segmentation

07/31/2018
by Shaobo Min, et al.

Learning-based methods suffer from limited clean annotations, especially in biomedical segmentation: noisy labels confuse the model, while scarce labels lead to inadequate training, and the two problems usually occur together. In this paper, we propose a deep attention network (DAN) that is more robust to noisy labels, using attention modules to eliminate the bad gradients they cause. In particular, a multi-stage filtering strategy is applied, because complete elimination at any single layer is impossible. Since prior knowledge of the noise distribution is usually unavailable, a two-stream network is developed in which each stream supplies information to the other's attention modules to mine the potential distribution of noisy gradients. The intuition is that a discussion between two students may uncover mistakes taught by the teacher. We further analyse how noisy labels propagate through the network and design three attention modules according to the different disturbances that noisy labels cause in different layers. Furthermore, a hierarchical distillation scheme is developed to produce more reliable pseudo labels from unlabeled data, which further boosts the DAN. Combining our DAN with hierarchical distillation significantly improves model performance when clean annotations are deficient. Experiments on the HVSMR 2016 and BRATS 2015 benchmarks demonstrate the effectiveness of our method.
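The abstract does not spell out how the two streams exchange information, so the following is only a minimal sketch of the general idea, in the spirit of co-teaching-style cross-stream filtering: each stream keeps the samples its peer considers low-loss (likely clean), so a label that both streams find implausible contributes no gradient. The function name, the `keep_ratio` parameter, and the use of per-sample losses as the exchanged signal are all assumptions for illustration, not the paper's actual attention modules.

```python
import numpy as np

def cross_stream_filter(losses_a, losses_b, keep_ratio=0.7):
    """Hypothetical sketch: each stream trains only on the samples
    that its PEER ranks as low-loss (likely clean), a simple proxy
    for the paper's cross-stream attention filtering."""
    k = max(1, int(round(keep_ratio * len(losses_a))))
    # Stream A keeps the samples stream B finds easy, and vice versa.
    keep_for_a = np.argsort(losses_b)[:k]
    keep_for_b = np.argsort(losses_a)[:k]
    return keep_for_a, keep_for_b

# Toy example: sample 3 has a large loss under both streams,
# so it is treated as noisy and dropped from both training sets.
la = np.array([0.2, 0.3, 0.25, 2.1, 0.4])
lb = np.array([0.25, 0.2, 0.3, 1.9, 0.35])
keep_a, keep_b = cross_stream_filter(la, lb, keep_ratio=0.8)
```

Because the filtering decision for each stream comes from the other stream, a single network's confirmation bias (repeatedly trusting its own mistakes) is dampened, which is the "two students discussing the teacher's mistakes" intuition from the abstract.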

