Generating Videos with Dynamics-aware Implicit Generative Adversarial Networks

02/21/2022
by   Sihyun Yu, et al.

In the deep learning era, generating long, high-quality videos remains challenging due to the spatio-temporal complexity and continuity of videos. Prior works have attempted to model the video distribution by representing videos as 3D grids of RGB values, which limits the scale of generated videos and neglects continuous dynamics. In this paper, we find that the recently emerging paradigm of implicit neural representations (INRs), which encode a continuous signal into a parameterized neural network, effectively mitigates the issue. Building on INRs of video, we propose the dynamics-aware implicit generative adversarial network (DIGAN), a novel generative adversarial network for video generation. Specifically, we introduce (a) an INR-based video generator that improves motion dynamics by manipulating the space and time coordinates differently and (b) a motion discriminator that efficiently identifies unnatural motions without observing entire long frame sequences. We demonstrate the superiority of DIGAN on various datasets, along with multiple intriguing properties, e.g., long video synthesis, video extrapolation, and non-autoregressive video generation. For example, DIGAN improves the previous state-of-the-art FVD score on UCF-101 by 30.7% and can be trained on 128-frame videos of 128x128 resolution, 80 frames longer than the 48 frames of the previous state-of-the-art method.
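The generator idea in (a) can be pictured with a small coordinate-based network. The snippet below is a minimal sketch, not the authors' implementation: the module names, layer sizes, and the way the latent codes modulate the coordinates are illustrative assumptions. What it shows is the core INR formulation, a network that maps each (x, y, t) coordinate to an RGB value, with a motion latent acting on the time coordinate separately from how a content latent acts on the spatial coordinates.

```python
# Minimal sketch (not the paper's code) of an INR-based video generator:
# a coordinate MLP conditioned on separate content and motion latents.
import torch
import torch.nn as nn

class INRVideoGenerator(nn.Module):
    """Maps an (x, y, t) coordinate plus latent codes to an RGB value.

    Space and time are handled differently: the content latent scales the
    spatial frequencies, while a separate motion latent scales the temporal
    frequency. The specific modulation scheme here is an assumption.
    """
    def __init__(self, latent_dim=64, hidden=256):
        super().__init__()
        self.space_freq = nn.Linear(latent_dim, 2)   # per-sample (x, y) frequencies
        self.time_freq = nn.Linear(latent_dim, 1)    # per-sample t frequency
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),                    # RGB output
        )

    def forward(self, coords, z_content, z_motion):
        # coords: (N, 3) rows of (x, y, t), all in [0, 1]
        xy = coords[:, :2] * torch.sigmoid(self.space_freq(z_content))
        t = coords[:, 2:] * torch.sigmoid(self.time_freq(z_motion))
        feats = torch.sin(torch.cat([xy, t], dim=-1))  # simple sinusoidal encoding
        return torch.tanh(self.mlp(feats))             # RGB in [-1, 1]

# Usage: render every pixel of an 8-frame, 32x32 clip from one latent pair.
if __name__ == "__main__":
    gen = INRVideoGenerator()
    z_c, z_m = torch.randn(1, 64), torch.randn(1, 64)
    ys, xs, ts = torch.meshgrid(
        torch.linspace(0, 1, 32), torch.linspace(0, 1, 32),
        torch.linspace(0, 1, 8), indexing="ij")
    coords = torch.stack([xs, ys, ts], dim=-1).reshape(-1, 3)
    rgb = gen(coords, z_c.expand(coords.size(0), -1), z_m.expand(coords.size(0), -1))
    video = rgb.reshape(32, 32, 8, 3)  # H x W x T x RGB
    print(video.shape)
```

Because the generator is queried coordinate by coordinate, frames can be sampled at arbitrary times and resolutions, which is what enables properties like video extrapolation and non-autoregressive generation mentioned above.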
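The motion discriminator in (b) can likewise be sketched without 3D convolutions over the full clip. The following snippet is an assumption-laden illustration (the layer widths and the way the time gap is injected are made up here): it scores a pair of frames together with their normalized time gap, so unnatural motion can be penalized without ever reading the whole frame sequence.

```python
# Minimal sketch (illustrative, not the paper's architecture) of a motion
# discriminator that judges temporal consistency from a pair of frames and
# the time gap between them, rather than a full 3D clip.
import torch
import torch.nn as nn

class PairMotionDiscriminator(nn.Module):
    def __init__(self, channels=3, base=64):
        super().__init__()
        # Two frames are stacked along the channel axis; the time gap is
        # broadcast as one extra constant channel.
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels + 1, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(base, 2 * base, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(2 * base, 1, 4, stride=2, padding=1),
        )

    def forward(self, frame_a, frame_b, dt):
        # frame_a, frame_b: (B, C, H, W); dt: (B,) normalized time gaps.
        gap = dt.view(-1, 1, 1, 1).expand(-1, 1, *frame_a.shape[-2:])
        x = torch.cat([frame_a, frame_b, gap], dim=1)
        return self.net(x).mean(dim=(1, 2, 3))  # one realism score per pair

# Usage: score two frame pairs separated by 5 of 16 time steps.
if __name__ == "__main__":
    disc = PairMotionDiscriminator()
    a, b = torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)
    print(disc(a, b, torch.tensor([5 / 16, 5 / 16])))
```

Scoring sampled pairs rather than whole clips keeps the discriminator's cost roughly independent of the clip length, which is the efficiency property the abstract highlights for training on long videos.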


