iPOKE: Poking a Still Image for Controlled Stochastic Video Synthesis

07/06/2021
by Andreas Blattmann, et al.

How would a static scene react to a local poke? What would be the effects on other parts of an object if you could locally push it? There will be distinctive movement, despite evident variations caused by the stochastic nature of our world. These outcomes are governed by the characteristic kinematics of objects, which dictate their overall motion in response to a local interaction. Conversely, the movement of an object provides crucial information about its underlying distinctive kinematics and the interdependencies between its parts. This two-way relation motivates learning a bijective mapping between object kinematics and plausible future image sequences. We therefore propose iPOKE - invertible Prediction of Object Kinematics - which, conditioned on an initial frame and a local poke, allows us to sample object kinematics and establishes a one-to-one correspondence to the corresponding plausible videos, thereby providing controlled stochastic video synthesis. In contrast to previous works, we do not generate arbitrary realistic videos, but provide efficient control of movements, while still capturing the stochastic nature of our environment and the diversity of plausible outcomes it entails. Moreover, our approach can transfer kinematics onto novel object instances and is not confined to particular object classes. The project page is available at https://bit.ly/3dJN4Lf
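To make the idea of a bijective, condition-dependent mapping concrete, below is a minimal PyTorch sketch of a cINN-style model: an invertible stack of conditional affine coupling blocks that maps a latent kinematics code to a Gaussian residual and back, conditioned on an embedding of the initial frame and the local poke. All module names, dimensionalities, and the coupling layout are illustrative assumptions for this sketch, not the authors' exact architecture.

```python
# Minimal, self-contained sketch (assumed design, not the paper's exact model):
# an invertible network z <-> v, conditioned on a frame+poke embedding.
import torch
import torch.nn as nn

class ConditionalCoupling(nn.Module):
    """Affine coupling: the second half of z is scaled/shifted by values
    predicted from the first half concatenated with the conditioning."""
    def __init__(self, dim, cond_dim, hidden=256):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, z, cond):
        z1, z2 = z[:, :self.half], z[:, self.half:]
        s, t = self.net(torch.cat([z1, cond], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)  # bounded log-scales for numerical stability
        return torch.cat([z1, z2 * torch.exp(s) + t], dim=1), s.sum(dim=1)

    def inverse(self, v, cond):
        v1, v2 = v[:, :self.half], v[:, self.half:]
        s, t = self.net(torch.cat([v1, cond], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)
        return torch.cat([v1, (v2 - t) * torch.exp(-s)], dim=1)

class PokeCINN(nn.Module):
    """Stack of couplings with fixed channel permutations in between;
    the whole map is bijective for every conditioning (frame, poke)."""
    def __init__(self, dim=64, cond_dim=128, n_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList(
            ConditionalCoupling(dim, cond_dim) for _ in range(n_blocks))
        self.perms = [torch.randperm(dim) for _ in range(n_blocks)]

    def forward(self, z, cond):
        logdet = torch.zeros(z.shape[0])
        for blk, p in zip(self.blocks, self.perms):
            z, ld = blk(z[:, p], cond)
            logdet = logdet + ld
        return z, logdet

    def inverse(self, v, cond):
        for blk, p in zip(reversed(self.blocks), reversed(self.perms)):
            v = blk.inverse(v, cond)[:, torch.argsort(p)]
        return v

model = PokeCINN()
cond = torch.randn(8, 128)                # assumed encoder output for frame + poke
z = torch.randn(8, 64)                    # latent code of an observed video
v, logdet = model(z, cond)                # forward pass: z -> Gaussian residual v
nll = 0.5 * (v ** 2).sum(dim=1) - logdet  # flow NLL, up to an additive constant
z_new = model.inverse(torch.randn(8, 64), cond)  # sample a new plausible motion
assert torch.allclose(model.inverse(v, cond), z, atol=1e-4)  # bijectivity check
```

In such a setup, training would maximize the Gaussian likelihood of v for latent codes of real videos; at test time, drawing v ~ N(0, I) and inverting yields diverse kinematics codes, each decoded into one plausible video for the same poke.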


research · 05/10/2021
Stochastic Image-to-Video Synthesis using cINNs
Video understanding calls for a model to learn the characteristic interp...

research · 06/21/2021
Understanding Object Dynamics for Interactive Image-to-Video Synthesis
What would be the effect of locally poking a static scene? We present an...

research · 09/19/2022
T3VIP: Transformation-based 3D Video Prediction
For autonomous skill acquisition, robots have to learn about the physica...

research · 08/19/2021
Click to Move: Controlling Video Generation with Sparse Motion
This paper introduces Click to Move (C2M), a novel framework for video g...

research · 06/06/2023
Learn the Force We Can: Multi-Object Video Generation from Pixel-Level Interactions
We propose a novel unsupervised method to autoregressively generate vide...

research · 11/25/2022
WALDO: Future Video Synthesis using Object Layer Decomposition and Parametric Flow Prediction
This paper presents WALDO (WArping Layer-Decomposed Objects), a novel ap...
