Exploiting Spatial-Temporal Semantic Consistency for Video Scene Parsing
Compared with image scene parsing, video scene parsing introduces temporal information, which can effectively improve the consistency and accuracy of predictions. In this paper, we propose a Spatial-Temporal Semantic Consistency method to capture class-exclusive context information. Specifically, we design a spatial-temporal consistency loss to constrain semantic consistency in the spatial and temporal dimensions. In addition, we adopt a pseudo-labeling strategy to enrich the training dataset. We obtain scores of 59.84 and 58.85, and our method wins 1st place in the VSPW challenge at ICCV 2021.
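The abstract does not spell out the loss formulation, so the sketch below is only a rough illustration of how spatial and temporal consistency terms for per-pixel class predictions could be written in PyTorch. The function names, the KL-based temporal term, the class-mean spatial term, and the assumption of frame alignment are all illustrative choices, not the authors' actual method.

    # Hypothetical sketch of spatial-temporal consistency terms; not the paper's
    # exact loss, only an illustration of constraining prediction consistency.
    import torch
    import torch.nn.functional as F

    def temporal_consistency_loss(logits_t, logits_tp1):
        """KL divergence between predictions of two adjacent frames.

        logits_t, logits_tp1: (N, C, H, W) segmentation logits for frames t and
        t+1, assumed to be roughly aligned (e.g. by warping, not shown here).
        """
        log_p_t = F.log_softmax(logits_t, dim=1)
        p_tp1 = F.softmax(logits_tp1, dim=1)
        return F.kl_div(log_p_t, p_tp1, reduction="batchmean")

    def spatial_consistency_loss(logits, labels, ignore_index=255):
        """Pull each pixel's predicted distribution toward the mean prediction
        of its ground-truth class, so pixels of one class stay consistent."""
        n, c, h, w = logits.shape
        probs = F.softmax(logits, dim=1)                       # (N, C, H, W)
        probs_flat = probs.permute(0, 2, 3, 1).reshape(-1, c)  # (N*H*W, C)
        labels_flat = labels.reshape(-1)
        valid = labels_flat != ignore_index
        probs_flat, labels_flat = probs_flat[valid], labels_flat[valid]

        classes = labels_flat.unique()
        loss = probs_flat.new_zeros(())
        for cls in classes:
            cls_probs = probs_flat[labels_flat == cls]
            center = cls_probs.mean(dim=0, keepdim=True)       # class-wise context vector
            loss = loss + (cls_probs - center).pow(2).sum(dim=1).mean()
        return loss / max(len(classes), 1)

In practice such terms would be added to the standard cross-entropy segmentation loss with small weighting coefficients; the weighting and the way frames are aligned are left open here.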