Perceptual Consistency in Video Segmentation

10/24/2021
by Yizhe Zhang, et al.

In this paper, we present a novel perceptual consistency perspective on video semantic segmentation, which can capture both temporal consistency and pixel-wise correctness. Given two nearby video frames, perceptual consistency measures how much the segmentation decisions agree with the pixel correspondences obtained by matching general perceptual features. More specifically, for each pixel in one frame, we find the most perceptually correlated pixel in the other frame; our intuition is that such a pair of pixels is highly likely to belong to the same class. We then assess how much the segmentation agrees with these perceptual correspondences, from which we derive the perceptual consistency of the segmentation maps across the two frames. Using perceptual consistency, we can evaluate the temporal consistency of video segmentation by measuring it over consecutive pairs of segmentation maps in a video. Furthermore, given a sparsely labeled test video, perceptual consistency can aid in predicting the pixel-wise correctness of the segmentation on an unlabeled frame: by measuring the perceptual consistency between the predicted segmentation and the available ground truth on a nearby frame, and combining it with the segmentation confidence, we can accurately assess the classification correctness at each pixel. Our experiments show that the proposed perceptual consistency evaluates the temporal consistency of video segmentation more accurately than flow-based measures, and that it predicts segmentation accuracy on unlabeled test frames more reliably than classification confidence alone. Finally, our measure can be used as a regularizer during the training of segmentation models, which leads to more temporally consistent video segmentation while maintaining accuracy.
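To make the matching step concrete, below is a minimal PyTorch sketch of the idea as described in the abstract, not the paper's exact formulation: pixels are matched across frames by the cosine similarity of their perceptual features, and consistency is the fraction of matched pairs that receive the same predicted class. The function name `perceptual_consistency`, the global (unwindowed) nearest-neighbor search, and the random inputs in the demo are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def perceptual_consistency(feat_a, feat_b, seg_a, seg_b):
    """Agreement between segmentation decisions and perceptual matches.

    feat_a, feat_b: (C, H, W) perceptual feature maps for two nearby frames
                    (e.g., from a pretrained backbone; the choice of
                    extractor is an assumption here).
    seg_a, seg_b:   (H, W) integer class predictions for the same frames.
    Returns the fraction of pixels in frame A whose predicted class matches
    that of their most perceptually correlated pixel in frame B.
    """
    C, H, W = feat_a.shape
    # L2-normalize per-pixel features so dot products are cosine similarities.
    fa = F.normalize(feat_a.reshape(C, -1), dim=0)  # (C, H*W)
    fb = F.normalize(feat_b.reshape(C, -1), dim=0)  # (C, H*W)

    # Full pairwise similarity is O((H*W)^2) memory; for real frame sizes
    # one would restrict the search to a local window, omitted for clarity.
    sim = fa.t() @ fb                               # (H*W, H*W)
    match = sim.argmax(dim=1)                       # best match in B per pixel of A

    # Consistency: matched pixels should carry the same predicted class.
    agree = seg_a.reshape(-1) == seg_b.reshape(-1)[match]
    return agree.float().mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    feat_a = torch.randn(64, 16, 16)
    feat_b = torch.randn(64, 16, 16)
    seg_a = torch.randint(0, 19, (16, 16))
    seg_b = torch.randint(0, 19, (16, 16))
    score = perceptual_consistency(feat_a, feat_b, seg_a, seg_b)
    print(f"perceptual consistency: {score:.3f}")
```

For the correctness-prediction use case, the same per-pixel agreement signal would be combined with the model's softmax confidence; the abstract does not specify the exact combination rule, so it is not shown here.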


