PCR: Pessimistic Consistency Regularization for Semi-Supervised Segmentation

10/16/2022
by Pengchong Qiao, et al.

Current state-of-the-art semi-supervised learning (SSL) segmentation methods train their models with pseudo labels, an optimistic training scheme that assumes the predicted pseudo labels are correct. When this assumption does not hold, the model is optimized toward wrong targets. In this paper, we propose Pessimistic Consistency Regularization (PCR), which accounts for the pessimistic case in which pseudo labels are not always correct. PCR enables the model to learn the ground truth (GT) under this pessimism by adaptively providing a candidate label set of K proposals for each unlabeled pixel. Specifically, we propose a pessimistic consistency loss that trains the model to learn the possible GT from multiple candidate labels, and we develop a candidate label proposal method that adaptively decides which pseudo labels are provided for each pixel. Our method is easy to implement and can be applied to existing baselines without changing their frameworks. Theoretical analysis and experiments on various benchmarks demonstrate the superiority of our approach over state-of-the-art alternatives.
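As a rough, per-pixel illustration of the two ingredients the abstract describes (an adaptive candidate set and a loss that only requires the true class to lie within it), one possible sketch is below. This is not the authors' implementation: the cumulative-probability proposal rule, the threshold `tau`, and the cap `k_max` are all assumptions made for illustration.

```python
import numpy as np

def candidate_set(teacher_probs, tau=0.9, k_max=3):
    """Hypothetical proposal rule: pick the smallest top-K set of classes
    whose cumulative teacher probability exceeds tau, capped at k_max.
    A confident pixel gets K = 1 (an ordinary pseudo label); an uncertain
    pixel gets a larger candidate set."""
    order = np.argsort(teacher_probs)[::-1]          # classes, most likely first
    cum = np.cumsum(teacher_probs[order])            # cumulative confidence
    k = min(int(np.searchsorted(cum, tau)) + 1, k_max)
    return order[:k]

def pessimistic_loss(student_probs, candidates):
    """Superset-style consistency loss: penalize the student only for the
    probability mass it places outside the candidate set, i.e. require the
    possible GT to be somewhere among the candidates."""
    return -np.log(student_probs[candidates].sum() + 1e-12)

# Example: an uncertain pixel yields two candidates instead of one hard label.
teacher = np.array([0.55, 0.40, 0.05])
cands = candidate_set(teacher, tau=0.9)              # -> classes [0, 1]
student = np.array([0.5, 0.3, 0.2])
loss = pessimistic_loss(student, cands)              # -log(0.8)
```

When K = 1 this reduces to the usual pseudo-label cross-entropy, which is consistent with the abstract's claim that PCR can be dropped into existing baselines.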


