Long Video Generation with Time-Agnostic VQGAN and Time-Sensitive Transformer

04/07/2022
by Songwei Ge, et al.

Videos are created to express emotion, exchange information, and share experiences, and video synthesis has intrigued researchers for a long time. Despite rapid progress driven by advances in visual synthesis, most existing studies focus on improving frame quality and the transitions between frames, while little attention has been paid to generating longer videos. In this paper, we present a method that builds on 3D-VQGAN and transformers to generate videos with thousands of frames. Our evaluation shows that our model, trained on 16-frame video clips from standard benchmarks such as the UCF-101, Sky Time-lapse, and Taichi-HD datasets, can generate diverse, coherent, and high-quality long videos. We also showcase conditional extensions of our approach that generate meaningful long videos by incorporating temporal information from text and audio. Videos and code can be found at https://songweige.github.io/projects/tats/index.html.
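To make the two-stage design concrete, here is a minimal PyTorch sketch, not the authors' released code: all module sizes, the replicate-padding choice, and the sliding-window sampler are illustrative assumptions. It shows how a time-agnostic 3D-VQGAN tokenizer and an autoregressive transformer can compose so that generation extends past the 16-frame training length.

```python
import torch
import torch.nn as nn

class TimeAgnostic3DVQGAN(nn.Module):
    """Toy first stage: 3D-conv encoder -> vector quantizer -> 3D-conv decoder.
    "Time-agnostic" here means avoiding operations (such as zero temporal
    padding) that would let tokens encode their absolute position in the
    clip; replicate padding below is one illustrative choice."""
    def __init__(self, n_codes=1024, dim=64):
        super().__init__()
        self.dim = dim
        self.enc = nn.Sequential(
            nn.Conv3d(3, dim, 4, stride=2, padding=1, padding_mode="replicate"),
            nn.ReLU(),
            nn.Conv3d(dim, dim, 4, stride=2, padding=1, padding_mode="replicate"),
        )
        self.codebook = nn.Embedding(n_codes, dim)
        self.dec = nn.Sequential(
            nn.ConvTranspose3d(dim, dim, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose3d(dim, 3, 4, stride=2, padding=1),
        )

    def encode(self, video):                      # video: (B, 3, T, H, W)
        z = self.enc(video)                       # (B, dim, T/4, H/4, W/4)
        b = z.shape[0]
        flat = z.flatten(2).transpose(1, 2).reshape(-1, self.dim)
        dist = torch.cdist(flat, self.codebook.weight)
        return dist.argmin(-1).view(b, -1)        # token ids: (B, N)

    def decode(self, ids, thw):                   # thw = latent (T', H', W')
        z = self.codebook(ids).transpose(1, 2).reshape(-1, self.dim, *thw)
        return self.dec(z)                        # back to (B, 3, T, H, W)


class TokenTransformer(nn.Module):
    """Toy second stage: a causal transformer over the discrete token ids."""
    def __init__(self, n_codes=1024, dim=256, n_layers=4, max_len=2048):
        super().__init__()
        self.tok = nn.Embedding(n_codes, dim)
        self.pos = nn.Embedding(max_len, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(dim, n_codes)

    def forward(self, ids):                       # ids: (B, L)
        L = ids.shape[1]
        x = self.tok(ids) + self.pos(torch.arange(L, device=ids.device))
        # Causal mask: each position attends only to earlier positions.
        mask = torch.triu(torch.full((L, L), float("-inf"),
                                     device=ids.device), diagonal=1)
        return self.head(self.blocks(x, mask=mask))   # (B, L, n_codes) logits


@torch.no_grad()
def generate_long(model, prompt_ids, total, window=256):
    """Sliding-window sampling: since the tokens carry no absolute-time
    signal, the transformer can extend far past its training length by
    conditioning on only the most recent `window` tokens at each step."""
    ids = prompt_ids
    while ids.shape[1] < total:
        logits = model(ids[:, -window:])[:, -1]   # next-token distribution
        nxt = torch.multinomial(logits.softmax(-1), 1)
        ids = torch.cat([ids, nxt], dim=1)
    return ids


# Usage: tokenize a 16-frame clip, then sample far more tokens than one clip.
vqgan, model = TimeAgnostic3DVQGAN(), TokenTransformer()
clip = torch.randn(1, 3, 16, 64, 64)              # one training-length clip
ids = vqgan.encode(clip)                          # (1, 4*16*16) = (1, 1024)
long_ids = generate_long(model, ids, total=4096)  # ~4x the training length
```

The design point the sketch illustrates is that once the tokenizer is free of absolute-time cues, length extrapolation reduces to ordinary autoregressive sampling over a bounded context window; the specific layer counts and sampling scheme above are placeholders, not the paper's configuration.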

