SVCNet: Scribble-based Video Colorization Network with Temporal Aggregation

03/21/2023
by Yuzhi Zhao, et al.

In this paper, we propose a scribble-based video colorization network with temporal aggregation, called SVCNet. It colorizes monochrome videos based on different user-given color scribbles and addresses three common issues in scribble-based video colorization: colorization vividness, temporal consistency, and color bleeding. To improve colorization quality and strengthen temporal consistency, SVCNet adopts two sequential sub-networks for precise colorization and temporal smoothing, respectively. The first stage includes a pyramid feature encoder, which incorporates color scribbles with a grayscale frame, and a semantic feature encoder, which extracts semantics. The second stage refines the output of the first stage by aggregating information from neighboring colorized frames (as short-range connections) and the first colorized frame (as a long-range connection). To alleviate color bleeding artifacts, we learn video colorization and segmentation simultaneously. Furthermore, we perform the majority of operations at a fixed small resolution and use a Super-resolution Module at the tail of SVCNet to recover the original size, which allows SVCNet to handle different image resolutions at inference. Finally, we evaluate the proposed SVCNet on the DAVIS and Videvo benchmarks. The experimental results demonstrate that SVCNet produces higher-quality and more temporally consistent videos than other well-known video colorization approaches. The code and models are available at https://github.com/zhaoyuzhi/SVCNet.
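To make the two-stage design concrete, below is a minimal PyTorch-style sketch of the pipeline as described in the abstract: a first stage that fuses a grayscale frame, user scribbles, and semantic features into a coarse chrominance prediction; a second stage that refines it using neighboring colorized frames and the first colorized frame; and a super-resolution tail that restores the original size. All module names, channel counts, and tensor layouts here are illustrative assumptions, not the authors' implementation (see the linked repository for the actual code).

```python
# Illustrative sketch only: module names, channel sizes, and shapes are assumptions.
import torch
import torch.nn as nn

class ColorizationStage(nn.Module):
    """Stage 1 (assumed): fuse a grayscale frame with user scribbles and semantics."""
    def __init__(self, feat_ch=64):
        super().__init__()
        # Pyramid feature encoder: grayscale frame (1 ch) + scribble ab channels (2 ch)
        self.pyramid_encoder = nn.Sequential(nn.Conv2d(3, feat_ch, 3, padding=1), nn.ReLU())
        # Semantic feature encoder: extracts semantics from the grayscale frame alone
        self.semantic_encoder = nn.Sequential(nn.Conv2d(1, feat_ch, 3, padding=1), nn.ReLU())
        # Decoder predicts ab chrominance at the fixed small working resolution
        self.decoder = nn.Conv2d(2 * feat_ch, 2, 3, padding=1)

    def forward(self, gray, scribbles):
        f = self.pyramid_encoder(torch.cat([gray, scribbles], dim=1))
        s = self.semantic_encoder(gray)
        return self.decoder(torch.cat([f, s], dim=1))  # coarse ab prediction

class TemporalStage(nn.Module):
    """Stage 2 (assumed): refine a frame from short-range neighbors and the first frame."""
    def __init__(self, feat_ch=64):
        super().__init__()
        # Input: current coarse ab + two neighboring colorized frames + first colorized frame
        self.refine = nn.Sequential(
            nn.Conv2d(2 * 4, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, 2, 3, padding=1),
        )

    def forward(self, coarse_ab, prev_ab, next_ab, first_ab):
        x = torch.cat([coarse_ab, prev_ab, next_ab, first_ab], dim=1)
        return coarse_ab + self.refine(x)  # temporally smoothed ab

# Tail super-resolution step (assumed here as plain upsampling): restores the
# fixed-resolution output to the original frame size at inference.
sr_module = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
```

Under these assumptions, a video would be processed frame by frame at the small working resolution, with Stage 2 consuming Stage 1 outputs of the current, neighboring, and first frames, and the super-resolution tail recovering the original resolution afterwards.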

