TapLab: A Fast Framework for Semantic Video Segmentation Tapping into Compressed-Domain Knowledge

03/30/2020
by   Junyi Feng, et al.
Real-time semantic video segmentation is a challenging task due to strict inference-speed requirements. Recent approaches mainly focus on reducing model size for higher efficiency. In this paper, we rethink the problem from a different viewpoint: exploiting knowledge already contained in compressed videos. We propose a simple and effective framework, dubbed TapLab, that taps into resources from the compressed domain. Specifically, we design a fast feature warping module that uses motion vectors for acceleration. To suppress the noise that motion vectors introduce, we further design a residual-guided correction module and a residual-guided frame selection module based on the residuals. Compared with state-of-the-art fast semantic image segmentation models, TapLab significantly reduces redundant computation, running around 3 times faster with comparable accuracy on 1024x2048 video. The experimental results show that TapLab achieves 70.6% mIoU on a single GPU card. A high-speed version even reaches 160+ FPS.
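To illustrate the core idea, here is a minimal NumPy sketch of the two compressed-domain ingredients the abstract describes: warping the previous frame's feature map with per-pixel motion vectors, and using the residual magnitude to decide when to rerun the full segmentation network. The function names, the per-pixel (rather than per-macroblock) vector layout, and the threshold are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def warp_features(prev_feat, motion_vectors):
    """Warp a (H, W, C) feature map using (H, W, 2) backward motion vectors.

    motion_vectors[y, x] = (dy, dx) points to the source location in
    prev_feat (a simplification; real codecs store per-macroblock vectors).
    """
    H, W, _ = prev_feat.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip(ys + motion_vectors[..., 0], 0, H - 1)
    src_x = np.clip(xs + motion_vectors[..., 1], 0, W - 1)
    return prev_feat[src_y, src_x]

def needs_refresh(residual, threshold=0.1):
    """Flag frames whose mean residual energy is too high to trust warping.

    In that case the full segmentation network would be rerun on the frame
    (the threshold value here is a hypothetical choice).
    """
    return float(np.mean(np.abs(residual))) > threshold
```

Zero motion vectors leave the features unchanged, and a low-residual frame keeps the cheap warped prediction; this is what lets the framework skip the heavy per-frame network on most frames.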
