Disentangling Video with Independent Prediction

01/17/2019
by William F. Whitney, et al.

We propose an unsupervised variational model for disentangling video into independent factors, i.e. each factor's future can be predicted from its past without considering the others. We show that our approach often learns factors which are interpretable as objects in a scene.
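
The core idea in the abstract (each factor's future is predictable from its own past alone) can be illustrated with a minimal, hypothetical PyTorch sketch. This is not the authors' architecture: the factor count, dimensions, per-factor MLPs, and loss below are placeholder assumptions chosen only to show predictors that never condition on other factors.

```python
# Hypothetical sketch of "independent prediction": each latent factor's future
# is predicted from its own past only. All names and sizes are illustrative.
import torch
import torch.nn as nn

class IndependentFactorPredictor(nn.Module):
    def __init__(self, n_factors=4, factor_dim=16, hidden_dim=64):
        super().__init__()
        self.n_factors = n_factors
        # One small predictor per factor; factor i never sees factor j.
        self.predictors = nn.ModuleList([
            nn.Sequential(
                nn.Linear(factor_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, factor_dim),
            )
            for _ in range(n_factors)
        ])

    def forward(self, z_t):
        # z_t: (batch, n_factors, factor_dim) latent factors at time t
        z_next = [self.predictors[i](z_t[:, i]) for i in range(self.n_factors)]
        return torch.stack(z_next, dim=1)  # predicted factors at time t+1


# Toy usage: stand-in tensors play the role of encoded latent factors for two
# consecutive frames; the prediction loss encourages each factor to remain
# self-predictable without looking at the others.
model = IndependentFactorPredictor()
z_t = torch.randn(8, 4, 16)
z_tp1 = torch.randn(8, 4, 16)
loss = nn.functional.mse_loss(model(z_t), z_tp1)
loss.backward()
```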

Related research

04/01/2020  Future Video Synthesis with Object Motion Prediction
We present an approach to predict future video frames given a sequence o...

04/20/2021  Learning Semantic-Aware Dynamics for Video Prediction
We propose an architecture and training scheme to predict video frames b...

08/22/2019  Compositional Video Prediction
We present an approach for pixel-level future prediction given an input ...

04/21/2022  Learning Future Object Prediction with a Spatiotemporal Detection Transformer
We explore future object prediction – a challenging problem where all ob...

11/03/2022  FactorMatte: Redefining Video Matting for Re-Composition Tasks
We propose "factor matting", an alternative formulation of the video mat...

09/01/2022  SketchBetween: Video-to-Video Synthesis for Sprite Animation via Sketches
2D animation is a common factor in game development, used for characters...

09/16/2021  DisUnknown: Distilling Unknown Factors for Disentanglement Learning
Disentangling data into interpretable and independent factors is critica...
