Interactive Video Stylization Using Few-Shot Patch-Based Training

04/29/2020
by Ondřej Texler, et al.

In this paper, we present a learning-based approach to keyframe-based video stylization that allows an artist to propagate the style from a few selected keyframes to the rest of the sequence. Its key advantage is that the resulting stylization is semantically meaningful, i.e., specific parts of moving objects are stylized according to the artist's intention. In contrast to previous style transfer techniques, our approach requires neither a lengthy pre-training process nor a large training dataset. We demonstrate how to train an appearance translation network from scratch using only a few stylized exemplars while implicitly preserving temporal consistency. This leads to a video stylization framework that supports real-time inference, parallel processing, and random access to an arbitrary output frame. It can also merge the content from multiple keyframes without the need to perform an explicit blending operation. We demonstrate its practical utility in various interactive scenarios, where the user paints over a selected keyframe and sees her style transferred to an existing recorded sequence or a live video stream.
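To make the core idea concrete: the abstract describes training a small appearance translation network from scratch on a single stylized keyframe, using patches sampled from it, and then applying the network to whole frames independently. The sketch below illustrates one way such patch-based few-shot training could look in PyTorch; the network architecture, patch size, optimizer settings, and L1 loss are placeholder assumptions chosen for illustration, not the authors' exact configuration.

    # Minimal sketch of few-shot patch-based training (assumes PyTorch;
    # network, patch size, and loss are simplified placeholders).
    import torch
    import torch.nn as nn

    PATCH = 32  # patch size in pixels (assumption)

    class TranslationNet(nn.Module):
        """Small fully convolutional appearance translation network (placeholder)."""
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, 3, padding=1),
            )
        def forward(self, x):
            return self.body(x)

    def sample_patches(frame, stylized, n):
        """Crop n random co-located patches from the keyframe and its stylized version.
        Both inputs are float tensors of shape (3, H, W)."""
        _, h, w = frame.shape
        xs = torch.randint(0, w - PATCH, (n,))
        ys = torch.randint(0, h - PATCH, (n,))
        src = torch.stack([frame[:, int(y):int(y)+PATCH, int(x):int(x)+PATCH]
                           for x, y in zip(xs, ys)])
        tgt = torch.stack([stylized[:, int(y):int(y)+PATCH, int(x):int(x)+PATCH]
                           for x, y in zip(xs, ys)])
        return src, tgt

    def train(keyframe, stylized_keyframe, steps=2000, batch=64):
        net = TranslationNet()
        opt = torch.optim.Adam(net.parameters(), lr=2e-4)
        loss_fn = nn.L1Loss()
        for _ in range(steps):
            src, tgt = sample_patches(keyframe, stylized_keyframe, batch)
            opt.zero_grad()
            loss_fn(net(src), tgt).backward()
            opt.step()
        return net

    # Inference: the network is fully convolutional, so it can be applied
    # to whole frames of arbitrary size, e.g. net(frame.unsqueeze(0)).

Because each frame is translated independently by the trained network, frames can be processed in parallel and any output frame can be accessed at random, which is consistent with the properties claimed in the abstract.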

Related research

10/20/2021
STALP: Style Transfer with Auxiliary Limited Pairing
We present an approach to example-based stylization of images that uses ...

03/16/2023
NLUT: Neural-based 3D Lookup Tables for Video Photorealistic Style Transfer
Video photorealistic style transfer is desired to generate videos with a...

09/18/2023
Instant Photorealistic Style Transfer: A Lightweight and Adaptive Approach
In this paper, we propose an Instant Photorealistic Style Transfer (IPST...

10/20/2020
Real-time Localized Photorealistic Video Style Transfer
We present a novel algorithm for transferring artistic styles of semanti...

09/09/2021
Generic resources are what you need: Style transfer tasks without task-specific parallel training data
Style transfer aims to rewrite a source text in a different target style...

03/27/2017
Coherent Online Video Style Transfer
Training a feed-forward network for fast neural style transfer of images...

07/04/2018
VideoKifu, or the automatic transcription of a Go game
In two previous papers [arXiv:1508.03269, arXiv:1701.05419] we described...
