Click to Move: Controlling Video Generation with Sparse Motion

08/19/2021
by Pierfrancesco Ardino, et al.

This paper introduces Click to Move (C2M), a novel framework for video generation in which the user controls the motion of the synthesized video through mouse clicks that specify simple trajectories for the key objects in the scene. Our model receives as input an initial frame, its corresponding segmentation map, and the sparse motion vectors encoding the user input. It outputs a plausible video sequence that starts from the given frame and whose motion is consistent with the user input. Notably, our proposed deep architecture incorporates a Graph Convolution Network (GCN) that models the movements of all the objects in the scene holistically, effectively combining the sparse user motion information with image features. Experimental results show that C2M outperforms existing methods on two publicly available datasets, demonstrating the effectiveness of our GCN framework at modelling object interactions. The source code is publicly available at https://github.com/PierfrancescoArdino/C2M.
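The C2M architecture is not reproduced here, but the core idea the abstract describes, a GCN layer that propagates sparse per-object motion vectors (from user clicks) to all object nodes alongside their image features, can be sketched as follows. All names, dimensions, and the fully connected scene graph are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def gcn_layer(node_feats, adj, weight):
    """One graph-convolution layer with symmetric normalization:
    relu(D^{-1/2} (A + I) D^{-1/2} X W)."""
    a_hat = adj + np.eye(adj.shape[0])          # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    norm = d_inv_sqrt @ a_hat @ d_inv_sqrt      # normalized adjacency
    return np.maximum(norm @ node_feats @ weight, 0.0)  # ReLU

rng = np.random.default_rng(0)
n_objects, feat_dim = 4, 8

# Per-object appearance features (e.g. pooled from the segmentation map).
appearance = rng.normal(size=(n_objects, feat_dim))

# Sparse user input: most objects get no click, object 1 is dragged (dx, dy).
motion = np.zeros((n_objects, 2))
motion[1] = [5.0, -2.0]

# Each graph node concatenates appearance and (possibly zero) motion.
nodes = np.concatenate([appearance, motion], axis=1)        # (4, 10)

# Illustrative scene graph: every object interacts with every other one.
adj = np.ones((n_objects, n_objects)) - np.eye(n_objects)

w = rng.normal(size=(nodes.shape[1], 16))
out = gcn_layer(nodes, adj, w)
print(out.shape)  # (4, 16)
```

After one round of message passing, every object's embedding has mixed in the clicked object's motion vector, which is how a GCN lets one sparse user click influence the predicted movement of all interacting objects.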


Related research

06/06/2023 · Learn the Force We Can: Multi-Object Video Generation from Pixel-Level Interactions
We propose a novel unsupervised method to autoregressively generate vide...

12/20/2018 · Animating Arbitrary Objects via Deep Motion Transfer
This paper introduces a novel deep learning framework for image animatio...

01/18/2022 · Motion Inbetweening via Deep Δ-Interpolator
We show that the task of synthesizing missing middle frames, commonly kn...

09/11/2019 · Specifying Object Attributes and Relations in Interactive Scene Generation
We introduce a method for the generation of images from an input scene g...

11/19/2021 · Xp-GAN: Unsupervised Multi-object Controllable Video Generation
Video Generation is a relatively new and yet popular subject in machine ...

04/04/2023 · HyperCUT: Video Sequence from a Single Blurry Image using Unsupervised Ordering
We consider the challenging task of training models for image-to-video d...

07/06/2021 · iPOKE: Poking a Still Image for Controlled Stochastic Video Synthesis
How would a static scene react to a local poke? What are the effects on ...
