HyperCon: Image-To-Video Model Transfer for Video-To-Video Translation Tasks

12/10/2019
by Ryan Szeto, et al.

Video-to-video translation for super-resolution, inpainting, style transfer, etc. is more difficult than corresponding image-to-image translation tasks due to the temporal consistency problem that, if left unaddressed, results in distracting flickering effects. Although video models designed from scratch produce temporally consistent results, training them to match the vast visual knowledge captured by image models requires an intractable number of videos. To combine the benefits of image and video models, we propose an image-to-video model transfer method called Hyperconsistency (HyperCon) that transforms any well-trained image model into a temporally consistent video model without fine-tuning. HyperCon works by translating a synthetic temporally interpolated video frame-wise and then aggregating over temporally localized windows on the interpolated video. It handles both masked and unmasked inputs, enabling support for even more video-to-video tasks than prior image-to-video model transfer techniques. We demonstrate HyperCon on video style transfer and inpainting, where it performs favorably compared to prior state-of-the-art video consistency and video inpainting methods, all without training on a single stylized or incomplete video.
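To make the frame-wise translation and temporal aggregation concrete, below is a minimal sketch of the interpolate-translate-aggregate idea in Python. The helper names (interpolate_frames, image_model), the linear frame blending, and the plain mean over each temporal window are illustrative assumptions for this sketch, not HyperCon's actual components.

```python
# Minimal sketch of an interpolate -> translate frame-wise -> aggregate pipeline.
# All helpers below are stand-ins: a real system would use a learned frame
# interpolation network and a pretrained image model (stylization, inpainting, etc.).
import numpy as np

def interpolate_frames(frames: np.ndarray, factor: int) -> np.ndarray:
    """Insert (factor - 1) linearly blended frames between neighbors,
    a stand-in for a learned frame-interpolation network."""
    out = []
    for a, b in zip(frames[:-1], frames[1:]):
        for k in range(factor):
            t = k / factor
            out.append((1 - t) * a + t * b)
    out.append(frames[-1])
    return np.stack(out)

def image_model(frame: np.ndarray) -> np.ndarray:
    """Placeholder for a well-trained image model applied to one frame."""
    return frame  # identity stand-in

def hypercon_sketch(frames: np.ndarray, factor: int = 3, window: int = 3) -> np.ndarray:
    """1) temporally interpolate, 2) translate each frame independently,
    3) average over temporally localized windows, 4) subsample back."""
    dense = interpolate_frames(frames, factor)
    translated = np.stack([image_model(f) for f in dense])
    half = window // 2
    aggregated = []
    for i in range(len(translated)):
        lo, hi = max(0, i - half), min(len(translated), i + half + 1)
        aggregated.append(translated[lo:hi].mean(axis=0))
    aggregated = np.stack(aggregated)
    return aggregated[::factor]  # keep frames aligned with the original timestamps

# Usage on a dummy 8-frame grayscale clip:
clip = np.random.rand(8, 64, 64).astype(np.float32)
consistent_clip = hypercon_sketch(clip, factor=3, window=3)
print(consistent_clip.shape)  # (8, 64, 64)
```

In this sketch, interpolation densifies the clip so that neighboring translated frames share content, window averaging suppresses frame-to-frame flicker, and subsampling restores the original frame rate; the aggregation rule and interpolation method are the places where the actual method differs from these simplifications.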


Related research

Learning Blind Video Temporal Consistency (08/01/2018)
Applying image processing algorithms independently to each frame of a vi...

Context-Preserving Two-Stage Video Domain Translation for Portrait Stylization (05/30/2023)
Portrait stylization, which translates a real human face image into an a...

Automatic Temporally Coherent Video Colorization (04/21/2019)
Greyscale image colorization for applications in image restoration has s...

Training-Free Neural Matte Extraction for Visual Effects (06/29/2023)
Alpha matting is widely used in video conferencing as well as in movies,...

Tunable Convolutions with Parametric Multi-Loss Optimization (04/03/2023)
Behavior of neural networks is irremediably determined by the specific l...

STALP: Style Transfer with Auxiliary Limited Pairing (10/20/2021)
We present an approach to example-based stylization of images that uses ...

Video ControlNet: Towards Temporally Consistent Synthetic-to-Real Video Translation Using Conditional Image Diffusion Models (05/30/2023)
In this study, we present an efficient and effective approach for achiev...
