BiSTNet: Semantic Image Prior Guided Bidirectional Temporal Feature Fusion for Deep Exemplar-based Video Colorization

12/05/2022
by Yixin Yang, et al.

How to effectively explore the colors of reference exemplars and propagate them to colorize each frame is vital for exemplar-based video colorization. In this paper, we present an effective BiSTNet that explores the colors of reference exemplars and utilizes them for video colorization through bidirectional temporal feature fusion guided by a semantic image prior. We first establish semantic correspondence between each frame and the reference exemplars in deep feature space to extract color information from the exemplars. Then, to better propagate the exemplar colors into each frame and avoid inaccurately matched colors, we develop a simple yet effective bidirectional temporal feature fusion module that better colorizes each frame. We note that color-bleeding artifacts usually appear around the boundaries of important objects in videos. To overcome this problem, we further develop a mixed expert block that extracts semantic information for modeling object boundaries, so that the semantic image prior can better guide the colorization process. In addition, we develop a multi-scale recurrent block to progressively colorize frames in a coarse-to-fine manner. Extensive experimental results demonstrate that the proposed BiSTNet performs favorably against state-of-the-art methods on benchmark datasets. Our code will be made available at <https://yyang181.github.io/BiSTNet/>.
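The abstract outlines bidirectional temporal feature fusion but gives no implementation details. The PyTorch sketch below is only a minimal illustration of the general idea: per-frame features are propagated forward and backward in time with a fusion cell and the two passes are merged for each frame. The module names, the convolutional fusion cell, and all shapes are illustrative assumptions, not the authors' actual BiSTNet implementation.

```python
# Hypothetical sketch of bidirectional temporal feature fusion.
# Names, shapes, and the fusion cell are assumptions for illustration;
# this is NOT the authors' BiSTNet code.
import torch
import torch.nn as nn


class FusionCell(nn.Module):
    """Fuses the current frame's features with the propagated state."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, feat, state):
        # feat, state: (1, C, H, W) -> fused state: (1, C, H, W)
        return self.conv(torch.cat([feat, state], dim=1))


class BidirectionalTemporalFusion(nn.Module):
    """Propagates per-frame features forward and backward in time,
    then merges the two passes for each frame."""

    def __init__(self, channels: int):
        super().__init__()
        self.forward_cell = FusionCell(channels)
        self.backward_cell = FusionCell(channels)
        self.merge = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, feats):
        # feats: (T, C, H, W) per-frame features of a video clip
        T = feats.shape[0]

        # Forward pass: past -> future
        fwd, state = [], torch.zeros_like(feats[:1])
        for t in range(T):
            state = self.forward_cell(feats[t:t + 1], state)
            fwd.append(state)

        # Backward pass: future -> past
        bwd, state = [None] * T, torch.zeros_like(feats[:1])
        for t in reversed(range(T)):
            state = self.backward_cell(feats[t:t + 1], state)
            bwd[t] = state

        # Merge the two temporal directions per frame
        fused = [self.merge(torch.cat([f, b], dim=1)) for f, b in zip(fwd, bwd)]
        return torch.cat(fused, dim=0)  # (T, C, H, W)


if __name__ == "__main__":
    # Toy usage: 8 frames of 64-channel features at 32x32 resolution.
    fusion = BidirectionalTemporalFusion(channels=64)
    video_feats = torch.randn(8, 64, 32, 32)
    print(fusion(video_feats).shape)  # torch.Size([8, 64, 32, 32])
```

In an exemplar-based setting, the per-frame features fed to such a module would presumably already carry the colors matched from the reference exemplars, so that the two temporal passes can suppress inaccurate matches; how BiSTNet does this exactly is described in the paper itself.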


research · 11/25/2020
Reference-Based Video Colorization with Spatiotemporal Correspondence
We propose a novel reference-based video colorization framework with spa...

research · 05/23/2023
FlowChroma – A Deep Recurrent Neural Network for Video Colorization
We develop an automated video colorization framework that minimizes the ...

research · 04/06/2020
Cascaded Deep Video Deblurring Using Temporal Sharpness Prior
We present a simple and effective deep convolutional neural network (CNN...

research · 03/06/2023
Butterfly: Multiple Reference Frames Feature Propagation Mechanism for Neural Video Compression
Using more reference frames can significantly improve the compression ef...

research · 12/02/2021
Semantic-Sparse Colorization Network for Deep Exemplar-based Colorization
Exemplar-based colorization approaches rely on reference image to provid...

research · 11/28/2010
Video Stippling
In this paper, we consider rendering color videos using a non-photo-real...

research · 09/16/2020
Dual Semantic Fusion Network for Video Object Detection
Video object detection is a tough task due to the deteriorated quality o...
