
VCGAN: Video Colorization with Hybrid Generative Adversarial Network

by   Yuzhi Zhao, et al.

We propose the Video Colorization with Hybrid Generative Adversarial Network (VCGAN), a hybrid recurrent approach to video colorization based on end-to-end learning. VCGAN addresses two prevalent issues in the video colorization domain: maintaining temporal consistency and unifying the colorization network and the refinement network into a single architecture. To enhance colorization quality and spatiotemporal consistency, the main stream of the VCGAN generator is assisted by two additional networks: a global feature extractor and a placeholder feature extractor. The global feature extractor encodes the global semantics of the grayscale input to enhance colorization quality, whereas the placeholder feature extractor acts as a feedback connection that encodes the semantics of the previously colorized frame in order to maintain spatiotemporal consistency. If the input to the placeholder feature extractor is replaced with the grayscale input, the hybrid VCGAN can also perform single-image colorization. To improve consistency between distant frames, we propose a dense long-term loss that penalizes the temporal disparity between every pair of remote frames. Trained jointly with colorization and temporal losses, VCGAN strikes a good balance between color vividness and video continuity. Experimental results demonstrate that VCGAN produces higher-quality and temporally more consistent colorized videos than existing approaches.
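The dense long-term loss described above can be sketched as follows. This is a simplified illustration, not the paper's exact formulation: the actual loss operates on optical-flow-warped, occlusion-masked frames, whereas this sketch (function name and weighting are assumptions) simply averages the L1 disparity over every pair of output frames, not just adjacent ones, which is what makes the loss "dense" and "long-term".

```python
import numpy as np

def dense_long_term_loss(frames):
    """Simplified sketch of a dense long-term temporal loss.

    frames: list of colorized frames as float arrays of identical shape.
    Penalizes the mean L1 disparity between EVERY pair of frames
    (i, j) with i < j, so even far-apart frames are pulled toward
    consistent colors. The real VCGAN loss additionally warps frame j
    to frame i with optical flow before comparing.
    """
    total, pairs = 0.0, 0
    for i in range(len(frames)):
        for j in range(i + 1, len(frames)):
            total += np.abs(frames[i] - frames[j]).mean()
            pairs += 1
    return total / max(pairs, 1)
```

A video whose frames are identical yields a loss of zero; color flicker anywhere in the clip increases every pair term it participates in, which is why the penalty smooths remote frames and not only neighbors.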




Generating Videos with Dynamics-aware Implicit Generative Adversarial Networks

In the deep learning era, long video generation of high-quality still re...

Generative Adversarial Networks for Video-to-Video Domain Adaptation

Endoscopic videos from multicentres often have different imaging conditi...

Learning Blind Video Temporal Consistency

Applying image processing algorithms independently to each frame of a vi...

Fully Automatic Video Colorization with Self-Regularization and Diversity

We present a fully automatic approach to video colorization with self-re...

Temporal Generative Adversarial Nets with Singular Value Clipping

In this paper, we propose a generative model, Temporal Generative Advers...

World-Consistent Video-to-Video Synthesis

Video-to-video synthesis (vid2vid) aims for converting high-level semant...

Automatic Video Colorization using 3D Conditional Generative Adversarial Networks

In this work, we present a method for automatic colorization of grayscal...