Learning from Future: A Novel Self-Training Framework for Semantic Segmentation

09/15/2022
by   Ye Du, et al.

Self-training has shown great potential in semi-supervised learning. Its core idea is to use the model learned on labeled data to generate pseudo-labels for unlabeled samples, and in turn teach itself. To obtain valid supervision, existing attempts typically employ a momentum teacher for pseudo-label prediction, yet suffer from the confirmation bias issue, where incorrect predictions provide wrong supervision signals that accumulate over the course of training. The primary cause of this drawback is that the prevailing self-training framework effectively guides the current state with previous knowledge, because the teacher is updated with the past student only. To alleviate this problem, we propose a novel self-training strategy that allows the model to learn from the future. Concretely, at each training step, we first virtually optimize the student (i.e., cache the gradients without applying them to the model weights), then update the teacher with the virtual future student, and finally ask the teacher to produce pseudo-labels for the current student as the guidance. In this way, we improve the quality of the pseudo-labels and thus boost performance. We also develop two variants of our future-self-training (FST) framework by peeping at the future both deeply (FST-D) and widely (FST-W). Taking unsupervised domain adaptive semantic segmentation and semi-supervised semantic segmentation as instances, we experimentally demonstrate the effectiveness and superiority of our approach under a wide range of settings. Code will be made publicly available.
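To make the described training step concrete, below is a minimal PyTorch-style sketch of one FST iteration, assuming pixel-wise cross-entropy losses, a plain SGD lookahead for the virtual student update, an EMA teacher, and confidence-thresholded pseudo-labels. The function name, hyperparameters (ema_momentum, conf_thresh), and loss choices are illustrative assumptions, not the paper's exact configuration.

```python
import copy
import torch
import torch.nn.functional as F

def fst_step(student, teacher, optimizer, x_lab, y_lab, x_unlab,
             ema_momentum=0.999, conf_thresh=0.95):
    """One illustrative future-self-training (FST) step (sketch only)."""
    # 1) Virtually optimize the student: compute gradients and apply them
    #    to a *copy*, so the real student weights stay untouched for now.
    sup_loss = F.cross_entropy(student(x_lab), y_lab)
    grads = torch.autograd.grad(sup_loss, list(student.parameters()))

    future_student = copy.deepcopy(student)
    lr = optimizer.param_groups[0]["lr"]
    with torch.no_grad():
        for p, g in zip(future_student.parameters(), grads):
            p -= lr * g  # plain SGD lookahead; the real optimizer may differ

    # 2) Update the teacher with the virtual *future* student via EMA.
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), future_student.parameters()):
            t.mul_(ema_momentum).add_(s, alpha=1.0 - ema_momentum)

    # 3) The future-informed teacher produces pseudo-labels for the
    #    unlabeled batch; low-confidence pixels are ignored.
    with torch.no_grad():
        probs = torch.softmax(teacher(x_unlab), dim=1)
        conf, pseudo = probs.max(dim=1)
        pseudo[conf < conf_thresh] = 255

    # 4) Train the *current* student on labeled data plus the pseudo-labels.
    optimizer.zero_grad()
    loss = F.cross_entropy(student(x_lab), y_lab) \
         + F.cross_entropy(student(x_unlab), pseudo, ignore_index=255)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The key design point the sketch tries to capture is the ordering: the lookahead update and the teacher's EMA update happen before pseudo-label generation, so the current student is supervised by a teacher that already reflects its own (virtual) next state rather than only its past.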


