LaMD: Latent Motion Diffusion for Video Generation

04/23/2023
by Yaosi Hu, et al.

Generating coherent and natural movement is the key challenge in video generation. This work proposes to condense video generation into the problem of motion generation, improving the expressiveness of motion and making video generation more tractable. This is achieved by decomposing the video generation process into latent motion generation and video reconstruction. To implement this idea, we present the latent motion diffusion (LaMD) framework, which consists of a motion-decomposed video autoencoder and a diffusion-based motion generator. Through careful design, the motion-decomposed video autoencoder compresses movement patterns into a concise latent motion representation, while the diffusion-based motion generator efficiently produces realistic motion in a continuous latent space under multi-modal conditions, at a cost comparable to that of image diffusion models. Results show that LaMD generates high-quality videos spanning a wide range of motions, from stochastic dynamics to highly controllable movements, and achieves new state-of-the-art performance on the BAIR, Landscape and CATER-GENs benchmarks for Image-to-Video (I2V) and Text-Image-to-Video (TI2V) generation. The source code of LaMD will be made available soon.
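The two-stage idea in the abstract can be sketched at a very high level: an autoencoder separates a video into a static content code and a compact motion latent, a diffusion-style sampler generates a new motion latent from noise, and the decoder re-composes a video from content plus generated motion. The sketch below is a toy NumPy illustration only, assuming hypothetical function names and latent dimensions; it does not implement the paper's actual networks or training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent sizes -- not taken from the paper.
CONTENT_DIM, MOTION_DIM, FRAMES = 16, 8, 4

def encode_video(video):
    """Toy stand-in for the motion-decomposed autoencoder's encoder:
    splits a video into a static content code and a compact motion latent."""
    content = video[0, :CONTENT_DIM]           # content taken from the first frame
    motion = video.mean(axis=0)[:MOTION_DIM]   # pooled dynamics as the motion latent
    return content, motion

def decode_video(content, motion):
    """Toy decoder: re-composes FRAMES frames from content + motion latents."""
    frames = [np.concatenate([content, motion * (t + 1)]) for t in range(FRAMES)]
    return np.stack(frames)

def generate_motion(steps=10):
    """Toy diffusion-style sampler on the continuous motion latent space:
    start from Gaussian noise and iteratively denoise toward a sample."""
    z = rng.standard_normal(MOTION_DIM)
    for t in range(steps, 0, -1):
        predicted_noise = 0.1 * z   # stand-in for a learned denoising network
        z = z - predicted_noise / t # crude denoising update
    return z

# Image-to-Video style usage: content comes from a single image, motion is sampled.
image = rng.standard_normal(CONTENT_DIM + MOTION_DIM)
content = image[:CONTENT_DIM]
motion = generate_motion()
video = decode_video(content, motion)
print(video.shape)  # (4, 24)
```

Because the diffusion model operates only on the low-dimensional motion latent rather than on pixels, sampling cost stays close to that of an image diffusion model, which is the efficiency argument the abstract makes.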


Related research

- 03/06/2023 · MotionVideoGAN: A Novel Video Generator Based on the Motion Space Learned from Image Pairs
- 01/18/2022 · Autoencoding Video Latents for Adversarial Video Generation
- 04/17/2023 · Text2Performer: Text-Driven Human Video Generation
- 02/20/2023 · STB-VMM: Swin Transformer Based Video Motion Magnification
- 03/24/2018 · VOS-GAN: Adversarial Learning of Visual-Temporal Dynamics for Unsupervised Dense Prediction in Videos
- 01/08/2021 · InMoDeGAN: Interpretable Motion Decomposition Generative Adversarial Network for Video Generation
- 12/01/2022 · VIDM: Video Implicit Diffusion Models
