Learned Equivariant Rendering without Transformation Supervision

11/11/2020
by Cinjon Resnick et al.

We propose a self-supervised framework to learn scene representations from video that are automatically delineated into objects and background. Our method relies on moving objects being equivariant with respect to their transformation across frames and the background being constant. After training, we can manipulate and render the scenes in real time to create unseen combinations of objects, transformations, and backgrounds. We show results on moving MNIST with backgrounds.
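The core self-supervised signal can be illustrated with a toy sketch: if the background is constant and the object representation is equivariant, then transforming the latent of frame t and re-rendering should reproduce frame t+1. The snippet below is a minimal illustration of that loss structure, not the paper's architecture: `encode`, `decode`, and the cyclic-shift transformation are hypothetical stand-ins for the learned encoder, renderer, and transformation group.

```python
import numpy as np

def shift(x, dx, dy):
    # Cyclic translation acting on an image (or an image-shaped latent).
    return np.roll(np.roll(x, dy, axis=0), dx, axis=1)

def encode(frame, background):
    # Hypothetical stand-in for the learned encoder: isolate the moving
    # object by subtracting the (assumed constant) background.
    return frame - background

def decode(obj_latent, background):
    # Hypothetical stand-in for the learned renderer: composite the
    # object representation back onto the background.
    return obj_latent + background

def equivariance_loss(frame_t, frame_t1, background, dx, dy):
    # Self-supervised signal: transforming the latent of frame t and
    # re-rendering should reproduce frame t+1 (object moved, background fixed).
    pred = decode(shift(encode(frame_t, background), dx, dy), background)
    return float(np.mean((pred - frame_t1) ** 2))

# Toy example: a bright object on a textured background, moved by (2, 1).
rng = np.random.default_rng(0)
background = rng.random((8, 8))
obj = np.zeros((8, 8)); obj[1, 1] = 1.0
frame_t = background + obj
frame_t1 = background + shift(obj, 2, 1)
print(equivariance_loss(frame_t, frame_t1, background, 2, 1))  # ~0, up to float rounding
```

With the correct transformation the loss is essentially zero, while a wrong transformation (e.g. no shift) leaves a nonzero residual, which is what drives learning in the real model.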

Related research

- Self-Supervised Equivariant Scene Synthesis from Video (02/01/2021)
- Future Video Synthesis with Object Motion Prediction (04/01/2020)
- Self-Supervised Representation Learning from Flow Equivariance (01/16/2021)
- Self-Supervised Damage-Avoiding Manipulation Strategy Optimization via Mental Simulation (12/20/2017)
- D^2NeRF: Self-Supervised Decoupling of Dynamic and Static Objects from a Monocular Video (05/31/2022)
- Implementation And Performance Evaluation Of Background Subtraction Algorithms (05/08/2014)
- Self-supervised Sparse to Dense Motion Segmentation (08/18/2020)
