T3VIP: Transformation-based 3D Video Prediction

09/19/2022, by Iman Nematollahi, et al.

For autonomous skill acquisition, robots have to learn the physical rules governing 3D world dynamics from their own past experience in order to predict and reason about plausible future outcomes. To this end, we propose a transformation-based 3D video prediction (T3VIP) approach that explicitly models 3D motion by decomposing a scene into its object parts and predicting their corresponding rigid transformations. Our model is fully unsupervised, captures the stochastic nature of the real world, and uses observational cues in the image and point cloud domains as its learning signals. To fully leverage all the 2D and 3D observational signals, we equip our model with automatic hyperparameter optimization (HPO) to determine the best way of learning from them. To the best of our knowledge, our model is the first generative model to provide RGB-D video prediction of the future for a static camera. Our extensive evaluation on simulated and real-world datasets demonstrates that our formulation leads to interpretable 3D models that predict future depth videos while achieving on-par performance with 2D models on RGB video prediction. Moreover, we demonstrate that our model outperforms 2D baselines on visuomotor control. Videos, code, dataset, and pre-trained models are available at http://t3vip.cs.uni-freiburg.de.
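The core idea of the decomposition can be sketched as follows: given soft masks assigning scene points to object parts, each part's predicted rigid transformation (rotation and translation) is applied to the point cloud, and the per-part results are blended by the masks to form the next frame's geometry. This is a minimal numpy illustration of that warping step only; the function name, tensor layout, and blending details are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def warp_point_cloud(points, masks, rotations, translations):
    """Warp a point cloud one step forward by blending per-part
    rigid transforms, weighted by soft segmentation masks.

    points:       (N, 3) current point cloud
    masks:        (K, N) soft assignment of each point to K object parts
                  (columns should sum to 1 over K)
    rotations:    (K, 3, 3) predicted rotation matrix per part
    translations: (K, 3)    predicted translation per part
    """
    # Apply each part's rigid transform to every point: shape (K, N, 3)
    candidates = np.einsum('kij,nj->kni', rotations, points)
    candidates = candidates + translations[:, None, :]
    # Blend the K candidate positions using the soft masks: shape (N, 3)
    return np.einsum('kn,kni->ni', masks, candidates)

# Toy usage: one static background part, one part translating along +x.
points = np.array([[0.0, 0.0, 1.0],
                   [1.0, 0.0, 1.0]])
masks = np.array([[1.0, 0.0],          # point 0 belongs to part 0
                  [0.0, 1.0]])         # point 1 belongs to part 1
rotations = np.stack([np.eye(3), np.eye(3)])
translations = np.array([[0.0, 0.0, 0.0],
                         [0.5, 0.0, 0.0]])
next_points = warp_point_cloud(points, masks, rotations, translations)
```

In the full model, the masks, rotations, and translations would themselves be predicted by the network from past RGB-D observations, and the warped point cloud would be rendered back into an RGB-D frame.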


