StyleGAN-V: A Continuous Video Generator with the Price, Image Quality and Perks of StyleGAN2

12/29/2021
by Ivan Skorokhodov, et al.

Videos show continuous events, yet most, if not all, video synthesis frameworks treat them discretely in time. In this work, we think of videos as what they should be: time-continuous signals, and extend the paradigm of neural representations to build a continuous-time video generator. For this, we first design continuous motion representations through the lens of positional embeddings. Then, we explore the question of training on very sparse videos and demonstrate that a good generator can be learned by using as few as 2 frames per clip. After that, we rethink the traditional pair of image and video discriminators and propose to use a single hypernetwork-based one. This decreases the training cost and provides a richer learning signal to the generator, making it possible to train directly on 1024^2 videos for the first time. We build our model on top of StyleGAN2; it is just ≈5% more expensive to train at the same resolution while achieving almost the same image quality. Moreover, our latent space features similar properties, enabling spatial manipulations that our method can propagate in time. We can generate arbitrarily long videos at arbitrarily high frame rates, while prior work struggles to generate even 64 frames at a fixed rate. Our model achieves state-of-the-art results on four modern 256^2 video synthesis benchmarks and one at 1024^2 resolution. Videos and the source code are available at the project website: https://universome.github.io/stylegan-v.
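
Two of the abstract's core ideas, continuous motion representations built from positional embeddings and training on as few as 2 frames per clip, can be illustrated with a short sketch. The snippet below is a minimal illustration under stated assumptions, not the paper's implementation: the helper names (`time_embedding`, `sample_sparse_timestamps`) and the fixed sinusoidal basis are hypothetical, whereas StyleGAN-V's actual motion codes are learned.

```python
import math
import torch

def time_embedding(t: torch.Tensor, dim: int = 256, max_period: float = 1e4) -> torch.Tensor:
    """Sinusoidal embedding of *continuous* timestamps t of shape (n,).

    Illustrative only: a fixed sinusoidal basis stands in for the learned
    continuous motion codes described in the paper. Because the embedding is
    defined for every real-valued t, frames can be decoded at any time point,
    i.e. at arbitrarily high frame rates and for arbitrarily long videos.
    """
    half = dim // 2
    freqs = torch.exp(-math.log(max_period) * torch.arange(half, dtype=torch.float32) / half)
    args = t[:, None].float() * freqs[None, :]                     # (n, dim/2)
    return torch.cat([torch.sin(args), torch.cos(args)], dim=-1)   # (n, dim)

def sample_sparse_timestamps(clip_len: float, k: int = 2) -> torch.Tensor:
    """Draw k random timestamps from a clip of length clip_len, mirroring the
    paper's finding that as few as 2 frames per clip suffice for training
    (hypothetical helper, not from the released code)."""
    return torch.sort(torch.rand(k) * clip_len).values

# Frames at arbitrary, non-integer times -> one motion code per frame.
t = sample_sparse_timestamps(clip_len=16.0, k=2)   # e.g. tensor([3.71, 11.04])
codes = time_embedding(t)                          # shape (2, 256)
```

In the full model, a discriminator (a single hypernetwork-based one, per the abstract) would then score such sparsely sampled frames jointly; that part is omitted here.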


Related research

03/06/2023 · MotionVideoGAN: A Novel Video Generator Based on the Motion Space Learned from Image Pairs
Video generation has achieved rapid progress benefiting from high-qualit...

10/23/2022 · Towards Real-Time Text2Video via CLIP-Guided, Pixel-Level Optimization
We introduce an approach to generating videos based on a series of given...

04/07/2022 · Long Video Generation with Time-Agnostic VQGAN and Time-Sensitive Transformer
Videos are created to express emotion, exchange information, and share e...

09/15/2022 · HARP: Autoregressive Latent Video Prediction with High-Fidelity Image Generator
Video prediction is an important yet challenging problem; burdened with ...

04/30/2021 · A Good Image Generator Is What You Need for High-Resolution Video Synthesis
Image and video synthesis are closely related areas aiming at generating...

03/21/2020 · Non-Adversarial Video Synthesis with Learned Priors
Most of the existing works in video synthesis focus on generating videos...

12/17/2019 · Jointly Trained Image and Video Generation using Residual Vectors
In this work, we propose a modeling technique for jointly training image...
