
VCGAN: Video Colorization with Hybrid Generative Adversarial Network

04/26/2021
by Yuzhi Zhao, et al.

We propose the Video Colorization with Hybrid Generative Adversarial Network (VCGAN), a hybrid recurrent approach to video colorization trained end to end. VCGAN addresses two prevalent issues in video colorization: maintaining temporal consistency and unifying the colorization and refinement networks into a single architecture. To enhance colorization quality and spatiotemporal consistency, the mainstream of the VCGAN generator is assisted by two additional networks: a global feature extractor and a placeholder feature extractor. The global feature extractor encodes the global semantics of the grayscale input to enhance colorization quality, whereas the placeholder feature extractor acts as a feedback connection that encodes the semantics of the previously colorized frame to maintain spatiotemporal consistency. If the placeholder feature extractor is instead fed the grayscale input, the hybrid VCGAN can also perform image colorization. To improve the consistency of distant frames, we propose a dense long-term loss that smooths the temporal disparity between every pair of remote frames. Trained jointly with colorization and temporal losses, VCGAN strikes a good balance between color vividness and video continuity. Experimental results demonstrate that VCGAN produces higher-quality and temporally more consistent colorized videos than existing approaches.
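
As a concrete illustration of the two ideas above, the sketch below shows (i) a toy hybrid generator whose mainstream is fed by a global feature extractor (grayscale semantics) and a placeholder feature extractor that takes the previously colorized frame as a feedback connection, and (ii) a pairwise temporal penalty in the spirit of the dense long-term loss. All module shapes, layer choices, and the exponential weighting are illustrative assumptions written in PyTorch, not the paper's exact architecture or loss.

```python
import torch
import torch.nn as nn

class HybridColorizer(nn.Module):
    """Toy hybrid generator: the mainstream fuses the grayscale frame with
    features from a global extractor (grayscale semantics) and a placeholder
    extractor (previously colorized frame, fed back recurrently)."""

    def __init__(self, feat_dim=32):
        super().__init__()
        # global feature extractor: pools the grayscale frame to a global code
        self.global_extractor = nn.Sequential(
            nn.Conv2d(1, feat_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        # placeholder feature extractor: encodes the previous colorized frame
        self.placeholder_extractor = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU())
        # mainstream: maps the fused features to a color prediction
        self.mainstream = nn.Sequential(
            nn.Conv2d(1 + 2 * feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, 3, 3, padding=1), nn.Tanh())

    def forward(self, gray, prev_color=None):
        if prev_color is None:
            # image-colorization mode / first frame: reuse the grayscale input
            prev_color = gray.repeat(1, 3, 1, 1)
        g = self.global_extractor(gray).expand(-1, -1, *gray.shape[-2:])
        p = self.placeholder_extractor(prev_color)
        return self.mainstream(torch.cat([gray, g, p], dim=1))

def dense_long_term_loss(colorized, grayscale, alpha=50.0):
    """Pairwise temporal penalty over EVERY pair of frames in a clip,
    down-weighted where the grayscale content itself changes (assumed
    weighting; the paper's exact formulation may differ)."""
    T = colorized.shape[0]
    loss, pairs = colorized.new_zeros(()), 0
    for i in range(T):
        for j in range(i + 1, T):
            change = (grayscale[i] - grayscale[j]).pow(2).mean()
            weight = torch.exp(-alpha * change)
            loss = loss + weight * (colorized[i] - colorized[j]).abs().mean()
            pairs += 1
    return loss / max(pairs, 1)

# Recurrent video colorization: each output is fed back as the next
# placeholder input; the dense loss then couples all frame pairs.
model = HybridColorizer()
clip = torch.rand(6, 1, 1, 64, 64)    # (T, B, 1, H, W) toy grayscale clip
outputs, prev = [], None
for gray in clip:
    prev = model(gray, prev)
    outputs.append(prev)
colorized = torch.stack(outputs)      # (T, B, 3, H, W)
loss = dense_long_term_loss(colorized, clip)
```

The key point of the sketch is that the temporal penalty is applied to every pair of frames in the clip, not only to consecutive ones, so consistency is propagated to far-apart frames; in practice one could also warp frames with optical flow before comparing them or weight remote pairs differently from adjacent ones.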


Related research

02/21/2022
Generating Videos with Dynamics-aware Implicit Generative Adversarial Networks
In the deep learning era, long video generation of high-quality still re...

04/17/2020
Generative Adversarial Networks for Video-to-Video Domain Adaptation
Endoscopic videos from multicentres often have different imaging conditi...

08/01/2018
Learning Blind Video Temporal Consistency
Applying image processing algorithms independently to each frame of a vi...

08/04/2019
Fully Automatic Video Colorization with Self-Regularization and Diversity
We present a fully automatic approach to video colorization with self-re...

11/21/2016
Temporal Generative Adversarial Nets with Singular Value Clipping
In this paper, we propose a generative model, Temporal Generative Advers...

07/16/2020
World-Consistent Video-to-Video Synthesis
Video-to-video synthesis (vid2vid) aims for converting high-level semant...

05/08/2019
Automatic Video Colorization using 3D Conditional Generative Adversarial Networks
In this work, we present a method for automatic colorization of grayscal...