StyleVideoGAN: A Temporal Generative Model using a Pretrained StyleGAN

07/15/2021
by   Gereon Fox, et al.

Generative adversarial networks (GANs) continue to produce advances in the visual quality of still images, as well as in the learning of temporal correlations. However, few works combine these two capabilities for the synthesis of video content: most methods require an extensive training dataset to learn temporal correlations, while being rather limited in the resolution and visual quality of their output frames. In this paper, we present a novel approach to the video synthesis problem that greatly improves visual quality and drastically reduces the amount of training data and resources needed to generate video content. Our formulation separates the spatial domain, in which individual frames are synthesized, from the temporal domain, in which motion is generated. For the spatial domain we make use of a pretrained StyleGAN network, whose latent space allows control over the appearance of the objects it was trained for. The expressive power of this model lets us embed our training videos in the StyleGAN latent space. Our temporal architecture is then trained not on sequences of RGB frames, but on sequences of StyleGAN latent codes. The advantageous properties of the StyleGAN space simplify the discovery of temporal correlations. We demonstrate that it suffices to train our temporal architecture on only 10 minutes of footage of one subject for about 6 hours. After training, our model can generate new portrait videos not only for the training subject, but for any subject that can be embedded in the StyleGAN space.
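
The separation of appearance (a frozen, pretrained StyleGAN) from motion (a sequence model over latent codes) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the 512-dimensional W space, the GRU-based generator, and the names TemporalLatentGenerator, synthesize_video, and the synthesis call on the pretrained generator are assumptions, and the paper's actual temporal architecture (trained on latent-code sequences) may differ.

```python
# Minimal sketch (not the authors' code): a temporal model that produces
# trajectories in StyleGAN's latent space, decoded by a frozen generator.
import torch
import torch.nn as nn

LATENT_DIM = 512   # dimensionality of StyleGAN's W space (assumed)
NOISE_DIM = 128    # per-sequence motion noise (hypothetical)

class TemporalLatentGenerator(nn.Module):
    """Maps a motion-noise vector to a trajectory of latent offsets."""
    def __init__(self, noise_dim=NOISE_DIM, latent_dim=LATENT_DIM, hidden=1024):
        super().__init__()
        self.rnn = nn.GRU(noise_dim, hidden, batch_first=True)
        self.to_latent = nn.Linear(hidden, latent_dim)

    def forward(self, z, num_frames):
        # Repeat the motion code at every time step and unroll the GRU.
        z_seq = z.unsqueeze(1).repeat(1, num_frames, 1)  # (batch, T, noise)
        h, _ = self.rnn(z_seq)                           # (batch, T, hidden)
        return self.to_latent(h)                         # (batch, T, latent)

def synthesize_video(stylegan_g, w_identity, temporal_g, z, num_frames=32):
    """Decode a latent trajectory into frames with a frozen StyleGAN.

    `stylegan_g` is assumed to expose a `synthesis(w)` call mapping a
    batch of W-space codes to images, as in common StyleGAN2 ports.
    """
    offsets = temporal_g(z, num_frames)        # motion in W space
    w_seq = w_identity.unsqueeze(1) + offsets  # identity code + motion
    frames = [stylegan_g.synthesis(w_seq[:, t]) for t in range(num_frames)]
    return torch.stack(frames, dim=1)          # (batch, T, C, H, W)

# Example: latent trajectories for a batch of 4 motion codes.
# temporal_g = TemporalLatentGenerator()
# w_traj = temporal_g(torch.randn(4, NOISE_DIM), num_frames=32)
```

Because the pretrained generator stays frozen, the temporal model only has to learn trajectories in a compact latent space rather than in raw pixels, which is what makes the small training budget reported above (10 minutes of footage, roughly 6 hours of training) plausible.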

Related research

research · 03/21/2020
Non-Adversarial Video Synthesis with Learned Priors
Most of the existing works in video synthesis focus on generating videos...

research · 05/15/2020
Face Identity Disentanglement via Latent Space Mapping
Learning disentangled representations of data is a fundamental problem i...

research · 04/12/2023
VidStyleODE: Disentangled Video Editing via StyleGAN and NeuralODEs
We propose VidStyleODE, a spatiotemporally continuous disentangled Video...

research · 02/25/2019
Generative Models for Low-Rank Video Representation and Reconstruction
Finding compact representation of videos is an essential component in al...

research · 11/20/2022
MagicVideo: Efficient Video Generation With Latent Diffusion Models
We present an efficient text-to-video generation framework based on late...

research · 10/16/2019
Exploiting video sequences for unsupervised disentangling in generative adversarial networks
In this work we present an adversarial training algorithm that exploits ...

research · 07/16/2021
CCVS: Context-aware Controllable Video Synthesis
This presentation introduces a self-supervised learning approach to the ...
