Improving Semantic Segmentation via Self-Training

by Yi Zhu et al.

Deep learning usually achieves the best results with complete supervision. In the case of semantic segmentation, this means that large amounts of pixelwise annotations are required to learn accurate models. In this paper, we show that we can obtain state-of-the-art results using a semi-supervised approach, specifically a self-training paradigm. We first train a teacher model on labeled data, and then generate pseudo labels on a large set of unlabeled data. Our robust training framework can digest human-annotated and pseudo labels jointly and achieves top performance on the Cityscapes, CamVid, and KITTI datasets while requiring significantly less supervision. We also demonstrate the effectiveness of self-training on a challenging cross-domain generalization task, outperforming the conventional fine-tuning method by a large margin. Lastly, to alleviate the computational burden caused by the large number of pseudo labels, we propose a fast training schedule that accelerates the training of segmentation models by up to 2x without performance degradation.
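The teacher-student loop described in the abstract can be sketched in a few lines. The sketch below is a toy illustration, not the paper's implementation: it uses a nearest-class-mean "model" on 1-D points instead of a deep segmentation network, and the confidence-based filtering of pseudo labels is a common heuristic assumed here rather than a detail taken from the abstract.

```python
# Toy sketch of a self-training loop: train a teacher on labeled data,
# pseudo-label unlabeled data, then train a student on the union.
# A nearest-class-mean classifier stands in for the segmentation model.

def fit(xs, ys):
    """'Train' a model: compute the mean of each class."""
    means = {}
    for c in set(ys):
        pts = [x for x, y in zip(xs, ys) if y == c]
        means[c] = sum(pts) / len(pts)
    return means

def predict(model, x):
    """Nearest class mean; the margin to the runner-up is a crude confidence."""
    dists = sorted((abs(x - m), c) for c, m in model.items())
    (d0, c0), (d1, _) = dists[0], dists[1]
    return c0, d1 - d0

def self_train(labeled_x, labeled_y, unlabeled_x, threshold=0.5):
    # 1) Train the teacher on human-annotated data.
    teacher = fit(labeled_x, labeled_y)
    # 2) Generate pseudo labels, keeping only confident predictions
    #    (the threshold is an assumption for this sketch).
    px, py = [], []
    for x in unlabeled_x:
        c, conf = predict(teacher, x)
        if conf >= threshold:
            px.append(x)
            py.append(c)
    # 3) Train the student on human-annotated and pseudo labels jointly.
    return fit(labeled_x + px, labeled_y + py)
```

With two well-separated classes, pseudo labels pull the student's class means toward the unlabeled data, e.g. `self_train([0.0, 1.0, 9.0, 10.0], [0, 0, 1, 1], [0.5, 2.0, 8.0, 9.5])`.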




Related papers:

Learning from Future: A Novel Self-Training Framework for Semantic Segmentation
  Self-training has shown great potential in semi-supervised learning. Its...

UCC: Uncertainty guided Cross-head Co-training for Semi-Supervised Semantic Segmentation
  Deep neural networks (DNNs) have witnessed great successes in semantic s...

Learning Self-Supervised Low-Rank Network for Single-Stage Weakly and Semi-Supervised Semantic Segmentation
  Semantic segmentation with limited annotations, such as weakly supervise...

Self-Ensembling GAN for Cross-Domain Semantic Segmentation
  Deep neural networks (DNNs) have greatly contributed to the performance ...

Semi-supervised semantic segmentation with uncertainty-guided self cross supervision
  As a powerful way of realizing semi-supervised segmentation, the cross s...

Auto-Annotation Quality Prediction for Semi-Supervised Learning with Ensembles
  Auto-annotation by ensemble of models is an efficient method of learning...

Warp-Refine Propagation: Semi-Supervised Auto-labeling via Cycle-consistency
  Deep learning models for semantic segmentation rely on expensive, large-...