Make It Move: Controllable Image-to-Video Generation with Text Descriptions

12/06/2021
by Yaosi Hu, et al.

Generating controllable videos that conform to user intentions is an appealing yet challenging topic in computer vision. To enable maneuverable control in line with user intentions, we propose a novel video generation task, named Text-Image-to-Video generation (TI2V). With both controllable appearance and motion, TI2V aims to generate videos from a static image and a text description. The key challenges of the TI2V task lie in aligning appearance and motion across different modalities and in handling the uncertainty of text descriptions. To address these challenges, we propose a Motion Anchor-based video GEnerator (MAGE) with an innovative motion anchor (MA) structure that stores appearance-motion aligned representations. To model uncertainty and increase diversity, MAGE further allows the injection of explicit conditions and implicit randomness. Through three-dimensional axial transformers, the MA interacts with the given image to generate subsequent frames recursively with satisfactory controllability and diversity. To accompany the new task, we build two video-text paired datasets based on MNIST and CATER for evaluation. Experiments conducted on these datasets verify the effectiveness of MAGE and show the appealing potential of the TI2V task. Source code for the model and datasets will be released soon.
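The abstract describes a recursive generation scheme: an appearance encoding of the input image and a motion encoding of the text are fused into a motion anchor, which then drives frame-by-frame prediction. The following is a minimal, purely illustrative sketch of that control flow; all function names, the string-based "frames", and the fusion step are hypothetical stand-ins, not the authors' MAGE implementation or API.

```python
# Hypothetical sketch of a TI2V-style recursive generation loop.
# Real models would use learned encoders and a 3D axial-transformer
# decoder; here each stage is a trivial placeholder.

def encode_image(image):
    # Stand-in appearance encoder: wraps the input image.
    return {"appearance": image}

def encode_text(text):
    # Stand-in motion encoder: tokenizes the text description.
    return {"motion": text.split()}

def build_motion_anchor(img_feat, txt_feat, noise=0):
    # The motion anchor (MA) stores an appearance-motion aligned
    # representation; explicit conditions or implicit randomness
    # could be injected here (modeled by the `noise` argument).
    return {**img_feat, **txt_feat, "noise": noise}

def predict_next_frame(anchor, prev_frame, t):
    # Stand-in decoder: appends the motion token for step t to the
    # previous frame, mimicking conditioning on both MA and history.
    token = anchor["motion"][t % len(anchor["motion"])]
    return f"{prev_frame}+{token}"

def generate_video(image, text, num_frames=4):
    # Fuse modalities once, then roll out frames recursively.
    anchor = build_motion_anchor(encode_image(image), encode_text(text))
    frames = [image]
    for t in range(num_frames - 1):
        frames.append(predict_next_frame(anchor, frames[-1], t))
    return frames

frames = generate_video("img0", "digit moves right", num_frames=3)
# → ["img0", "img0+digit", "img0+digit+moves"]
```

The sketch only illustrates the data flow (static image + text → motion anchor → recursive frame rollout), which is the structural idea the abstract attributes to MAGE.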


