RELATE: Physically Plausible Multi-Object Scene Synthesis Using Structured Latent Spaces

07/02/2020 ∙ by Sebastien Ehrhardt, et al.

We present RELATE, a model that learns to generate physically plausible scenes and videos of multiple interacting objects. Like other generative approaches, RELATE is trained end-to-end on raw, unlabeled data. RELATE combines an object-centric GAN formulation with a module that explicitly accounts for correlations between individual objects, allowing the model to generate realistic scenes and videos from a physically interpretable parameterization. Furthermore, we show that modeling object correlations is necessary for learning to disentangle object position from identity. We find that RELATE is also amenable to physically realistic scene editing and that it significantly outperforms prior art in object-centric scene generation on both synthetic (CLEVR, ShapeStacks) and real-world data (street traffic scenes). In addition, in contrast to state-of-the-art methods in object-centric generative modeling, RELATE extends naturally to dynamic scenes and generates videos of high visual fidelity.
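The key idea — sampling per-object latents and then letting a correlation module adjust each object's pose based on all the others — can be caricatured in a few lines. The sketch below is purely illustrative: the function name, the feature construction, and the random projection standing in for a learned MLP are all assumptions, not the authors' implementation.

```python
import numpy as np

def correlate_positions(positions, appearances, rng):
    """Toy stand-in for a pairwise correlation module: each object's
    independently sampled position is shifted by a function of every
    other object's appearance latent and relative offset. A fixed
    random projection replaces the learned network for illustration."""
    n, d = positions.shape
    # Hypothetical "learned" weights: maps (appearance, offset) -> position shift.
    W = rng.standard_normal((appearances.shape[1] + d, d)) * 0.1
    updated = positions.copy()
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # Pairwise feature: neighbor's appearance latent and relative offset.
            feat = np.concatenate([appearances[j], positions[j] - positions[i]])
            updated[i] += feat @ W
    return updated

rng = np.random.default_rng(0)
pos = rng.standard_normal((4, 2))   # independently sampled 2-D object positions
app = rng.standard_normal((4, 8))   # per-object appearance latents
corrected = correlate_positions(pos, app, rng)
```

After this step, the corrected positions and appearance latents would be handed to the generator, which composes them into an image; the paper's point is that without such a correlation step, position and identity fail to disentangle.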



Code Repositories

relate

Official PyTorch implementation of "RELATE: Physically Plausible Multi-Object Scene Synthesis Using Structured Latent Spaces".
