VideoComposer: Compositional Video Synthesis with Motion Controllability

06/03/2023
by   Xiang Wang, et al.

The pursuit of controllability as a higher standard of visual content creation has yielded remarkable progress in customizable image synthesis. However, controllable video synthesis remains challenging due to the large variation of temporal dynamics and the requirement for cross-frame temporal consistency. Building on the paradigm of compositional generation, this work presents VideoComposer, which allows users to flexibly compose a video from textual conditions, spatial conditions, and, more importantly, temporal conditions. Specifically, considering the characteristics of video data, we introduce motion vectors from compressed videos as an explicit control signal that provides guidance on temporal dynamics. In addition, we develop a Spatio-Temporal Condition encoder (STC-encoder) that serves as a unified interface for effectively incorporating the spatial and temporal relations of sequential inputs, with which the model can make better use of temporal conditions and hence achieve higher inter-frame consistency. Extensive experimental results show that VideoComposer can simultaneously control the spatial and temporal patterns within a synthesized video using various forms of guidance, such as text descriptions, sketch sequences, reference videos, or even simple hand-crafted motions. The code and models will be publicly available at https://videocomposer.github.io.
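As a rough illustration of the kind of temporal signal the abstract refers to (not the paper's actual pipeline, which reads motion vectors directly from the compressed bitstream), the sketch below estimates per-block motion vectors between two grayscale frames via exhaustive block matching. All function names and parameters here are hypothetical, chosen only to make the idea of an explicit per-block motion field concrete.

```python
import numpy as np

def block_motion_vectors(prev, curr, block=8, search=4):
    """Estimate a per-block motion field between two grayscale frames
    using exhaustive block matching (sum of absolute differences).
    This mimics the motion vectors stored in compressed video streams,
    which VideoComposer uses as an explicit temporal control signal."""
    h, w = prev.shape
    ny, nx = h // block, w // block
    mvs = np.zeros((ny, nx, 2), dtype=np.int32)  # (dy, dx) per block
    for by in range(ny):
        for bx in range(nx):
            y0, x0 = by * block, bx * block
            ref = curr[y0:y0 + block, x0:x0 + block].astype(np.float64)
            best, best_mv = np.inf, (0, 0)
            # Search a small window around the block's position in `prev`.
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y1, x1 = y0 + dy, x0 + dx
                    if y1 < 0 or x1 < 0 or y1 + block > h or x1 + block > w:
                        continue
                    cand = prev[y1:y1 + block, x1:x1 + block].astype(np.float64)
                    sad = np.abs(ref - cand).sum()
                    if sad < best:
                        best, best_mv = sad, (dy, dx)
            mvs[by, bx] = best_mv
    return mvs
```

A dense field like this (one 2-vector per block, per frame pair) is exactly the shape of conditioning a sequence encoder such as the STC-encoder could consume alongside text and spatial inputs.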


