
Object-Centric Representation Learning with Generative Spatial-Temporal Factorization

11/09/2021
by   Li Nanbo, et al.

Learning object-centric scene representations is essential for attaining structural understanding and abstraction of complex scenes. Yet, as current approaches for unsupervised object-centric representation learning are built upon either a stationary-observer assumption or a static-scene assumption, they often: i) suffer from single-view spatial ambiguities, or ii) infer object representations incorrectly or inaccurately from dynamic scenes. To address this, we propose the Dynamics-aware Multi-Object Network (DyMON), a method that broadens the scope of multi-view object-centric representation learning to dynamic scenes. We train DyMON on multi-view-dynamic-scene data and show that DyMON learns, without supervision, to factorize the entangled effects of observer motions and scene object dynamics from a sequence of observations, and constructs scene object spatial representations suitable for rendering at arbitrary times (querying across time) and from arbitrary viewpoints (querying across space). We also show that the factorized scene representations (w.r.t. objects) support querying about a single object by space and time independently.



1 Introduction

Object-centric representation learning promises improved interpretability, generalization, and data-efficient learning on various downstream tasks like reasoning (e.g. Janner et al. (2019); Yang et al. (2020)) and planning (e.g. Mnih et al. (2015); Carlos et al. (2008); Zadaianchuk et al. (2021)). It aims at discovering compositional structure around objects in raw sensory input data, i.e. a binding problem Greff et al. (2020), where segregation (i.e. factorization) is the major challenge, especially when there is no supervision. In the context of visual data, most existing work has focused on single-view settings, i.e. decomposing and representing 3D scenes based on a single 2D image Burgess et al. (2019); Greff et al. (2019); Locatello et al. (2020) or a fixed-view video Lin et al. (2020). These methods often suffer from single-view spatial ambiguities and thus show failures or inaccuracies in representing 3D scene properties. Nanbo et al. (2020) demonstrated that such ambiguities can be effectively resolved by multi-view information aggregation. However, current multi-view models are built upon a foundational static-scene assumption. As a result, they: 1) require static-scene data for training, and 2) cannot handle dynamic scenes, where spatial structures evolve over time. This greatly limits a model's potential in real-world applications.

In this work, we target an unexplored problem: unsupervised object-centric latent representation learning in multi-view-dynamic-scene scenarios. Despite the importance of the problem to spatial-temporal understanding of 3D scenes, solving it presents several technical challenges. Consider one particularly interesting scenario where both an observer (e.g. a camera) and the objects in the scene are moving at the same time. To aggregate 3D object information from two consecutive observations, an agent needs not only to handle the cross-view object correspondence problem Nanbo et al. (2020) but also to reason about the independent effects of the scene dynamics and observer motions. One can consider the aggregation as a process of answering two questions: "how much has an object really changed in 3D space?" and "what previous spatial unclarity can be clarified by the current view?". In this paper, we refer to the relationship between the scene spatial structures and the viewpoints as temporal entanglement, because their temporal dependence complicates the identification of the independent generative mechanisms Schölkopf et al. (2021).

We introduce DyMON (Dynamics-aware Multi-Object Network), a unified unsupervised framework for multi-view object-centric representation learning. Instead of making a strong static-scene assumption as in previous multi-view methods, we make only two weak assumptions about the training scenes: i) observation sequences are taken at a high frame rate, and ii) there exists a significant difference between the speed of the observer and that of the objects (see Sec. 3). Under these two assumptions, over a short period, we can reduce a multi-view-dynamic-scene problem to a multi-view-static-scene problem if the observer moves faster than the scene evolves, or to a single-view-dynamic-scene problem if the scene evolves faster than the observer moves. These local approximations allow DyMON to learn independently the generative relationships between scenes and observations, and between viewpoints and observations, during training, which further enables DyMON to address scene spatial-temporal factorization, i.e. resolving the observer-scene temporal entanglement and decomposing scenes into objects, at test time.

Through experiments we demonstrate that: (i) DyMON is the first unsupervised multi-view object-centric representation learning method that can train and perform object-oriented inference on multi-view-dynamic-scene data (see Sec. 5). (ii) DyMON recovers the independent generative mechanisms of an observer and scene objects from observations and permits querying predictions of scene appearances and segmentations across both space and time (see Sec. 5.1). (iii) As DyMON learns scene representations that are factorized in terms of objects, it allows single-object manipulation along both the space (i.e. viewpoint) and time axes, e.g. replaying the dynamics of a single object without interfering with the others (see Sec. 5.1).

2 Background

Object-centric Representations Consider object-centric representation inference as the inverse problem of an observation generation problem (i.e. the vision-as-inverse-graphics idea Yuille and Kersten (2006)). In the forward process, i.e. observation generation, we have a scene well-defined by a set of parameter vectors z_{1:K} = {z_1, ..., z_K}, where a z_k specifies one and only one object in the scene. An observation of the scene, e.g. an RGB image x, can be taken only by a specified observer (often defined by a viewpoint v) which is independent of the scene in the forward problem, using a specific mapping g. Assuming a deterministic process, an observation is generated as x = g(z_{1:K}, v), where v is often omitted in single-view scenarios (e.g. Burgess et al. (2019); Greff et al. (2019)). With the forward problem defined, we can describe the goal of learning an object-centric representation as inferring the intrinsic parameters z_{1:K} of the objects that compose a scene based on the scene observation x; in other words, computing a factorized posterior over z_{1:K}, even though it is computationally intractable. As the number of objects is unknown in the inverse problem, it is worth noting that: i) K is often set globally to a sufficiently large number (greater than the actual number of objects) to capture all scene objects, and ii) we allow empty "slots".
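To make the forward process concrete, here is a minimal numpy sketch of the compositional generation x = g(z_{1:K}, v): one latent vector per object slot is decoded (conditioned on the viewpoint) into an RGB component and a mask, and the components are composited. The toy linear "decoders", weights, and shapes are illustrative stand-ins, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
K, D, H, W = 5, 8, 16, 16          # slots, latent dim, image size

# Toy "decoder" weights standing in for the trained networks.
W_rgb  = rng.normal(0, 0.1, (D + 3, H * W * 3))   # +3 for the viewpoint v
W_mask = rng.normal(0, 0.1, (D + 3, H * W))

def decode_slot(z_k, v):
    """Decode one object latent (conditioned on viewpoint v) into an
    RGB image and an unnormalized mask-logit map."""
    h = np.concatenate([z_k, v])
    rgb = 1 / (1 + np.exp(-h @ W_rgb))             # sigmoid -> (0, 1)
    return rgb.reshape(H, W, 3), (h @ W_mask).reshape(H, W)

def render(z_slots, v):
    """x = g(z_{1:K}, v): softmax the K mask logits pixel-wise and
    composite the K RGB components with the resulting mixing weights."""
    rgbs, logits = zip(*(decode_slot(z_k, v) for z_k in z_slots))
    logits = np.stack(logits)                      # (K, H, W)
    pis = np.exp(logits - logits.max(0))
    pis /= pis.sum(0)                              # pixel-wise mixing coefficients
    return np.einsum('khw,khwc->hwc', pis, np.stack(rgbs))

z = rng.normal(size=(K, D))        # one latent per object "slot"
x = render(z, v=np.zeros(3))       # an observation from viewpoint v
```

Because each pixel's color is a convex combination of the K slot predictions, the rendered image stays in the valid intensity range, and varying only v re-renders the same scene from another viewpoint.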

Temporal Entanglement The dynamic nature of the world suggests that the spatial configuration of a scene (denoted by z^t_{1:K}) and an observer v^t are bound to the specific time t that an observation x^t is taken. Let ⟨z^t_{1:K}, v^t⟩ represent a data sample, e.g. a sequence or set of multi-view image observations, from a dataset, where we define ⟨·⟩ as a joint-sample indicator that forbids independent sampling of the random variables wherein, and T is the number of images in the sample. Assuming z^t_{1:K} is given in the data sample for now, i.e. focusing on the generative process only, we augment a scene data sample with the corresponding observation as ⟨z^t_{1:K}, v^t, x^t⟩. In general, we assume an independent scene-observer relation, but the scene and observer nevertheless become dependent when the corresponding observation is given. Under a static-scene assumption, we can treat an augmented data sample as one where z_{1:K} and v^t are separable (i.e. can be sampled independently). In this case, to recover the independent generative mechanism (i.e. train a generator g) w.r.t. scenes and observers from data, GQN Eslami et al. (2018) and MulMON Nanbo et al. (2020) fix the scene z_{1:K} and intervene on the viewpoints v. From a causal perspective, this can be seen as estimating the effect of setting the viewpoint, implicitly under a causal model where the scene and the viewpoint are independent causes of the observation. However, in dynamic settings, the same estimation, i.e. sampling v^t independently of z^t_{1:K}, is forbidden by the ⟨·⟩ indicator. Intuitively, an observer cannot take more than one observation from different viewpoints at the same time t. In this paper, we refer to this issue as temporal entanglement in view of the temporal implication of the ⟨·⟩ indicator.

Figure 1: Top Left: Multi-view-dynamic-scene setup. v^t, with a time-index superscript, denotes the spatial configuration (e.g. position, orientation, etc.) of the observer at a specific time. We highlight one particularly interesting, yet unexplored, scenario where both an observer and scene objects are moving at the same time, which entangles the independent effects of the observer's and scene objects' motions on a scene observation, an image sequence (see bottom left). A latent variable z^t that is indexed by time describes the objects and their spatial configurations at a specific time (see Sec. 2 for the detailed definition). Right: DyMON decouples the generative effects of observer motions and scene object motions and enables: 1) reconstruction and factorization of the observed views (see bottom right), and 2) novel-view appearance and decomposition prediction for arbitrary times, i.e. querying across both space and time (see top right).

3 DyMON

Our goal is to train a multi-view object-centric representation learning model that recovers, from dynamic-scene observations, the independent generative mechanisms of scene objects (and their motions) and observer motions. In this section, we detail how DyMON addresses the two presented challenges: 1) temporal disentanglement (see Sec. 3.1) and 2) spatial object factorization (see Sec. 3.2). We discuss the training of DyMON in Sec. 3.3.

3.1 Temporal Disentanglement

The key to resolving temporal entanglement, i.e. temporal disentanglement, is to enable sampling v^t independently of z^t_{1:K}, or z^t_{1:K} independently of v^t. This is seemingly impossible in the multi-view-dynamic-scene setting as it requires fixing either z_{1:K} (static scene) or v (single view) respectively. In this paper, we make two assumptions about the training scenes to ensure the satisfaction of the aforementioned two requirements without violating the global multi-view-dynamic-scene setting. Let us first describe the dynamics of scenes and observers with two independent dynamical systems:

z^{t_2}_{1:K} = z^{t_1}_{1:K} + (t_2 - t_1) · u_z,    v^{t_2} = v^{t_1} + (t_2 - t_1) · u_v,    (1)

where t_1 and t_2 are the times at which two consecutive observations were taken, and u_z and u_v are the average velocities of the scene objects and the observer within [t_1, t_2]. Note that we use a z_k to capture both the shape and pose information of an object; however, we do not consider shape changes in this work. With the dynamical systems defined, we introduce our assumptions (which define a tractable subset of all possible situations) as:

  • (A1) The high-frame-rate assumption: the interval Δt = t_2 - t_1 between consecutive observations is small, s.t. the per-interval changes ‖u_z‖·Δt and ‖u_v‖·Δt are small,

  • (A2) The large-speed-difference assumption: the data comes from one of two cases (SCFO: slow camera, fast objects, or FCSO: fast camera, slow objects), that satisfy ‖u_v‖ ≤ ε_v or ‖u_z‖ ≤ ε_z respectively, where ‖·‖ computes a speed, and ε_v and ε_z are small positive constants.

A1 allows us to assume a nearly static scene or a fixed viewpoint for a short period. Consider an example where we assume a static scene, i.e. u_z ≈ 0, in [t_1, t_2]: A1 essentially allows us to extract v^t out of a joint sample, treating the scene z_{1:K} as fixed while the viewpoint varies. An intuitive way to state A2 is: ‖u_v‖ ≫ ‖u_z‖ or ‖u_z‖ ≫ ‖u_v‖, which specifies a large difference between scene speeds and observer speeds.

These two assumptions enable us to accumulate instant changes (velocities) of one variable (either z_{1:K} or v) over a finite number of intervals while ignoring the small changes of the other (assumed fixed). We then treat a slow-camera-fast-objects (i.e. SCFO) scenario, where ‖u_z‖ ≫ ‖u_v‖, as an approximate single-view-dynamic-scene scenario, and a fast-camera-slow-objects (i.e. FCSO) scenario, where ‖u_v‖ ≫ ‖u_z‖, as an approximate multi-view-static-scene scenario. Either case allows us to resolve the temporal entanglement problem. Importantly, to answer the question "is a given data sample an SCFO or an FCSO sample?", we need to quantitatively specify the two assignment criteria ε_v and ε_z. However, a direct calculation of these two constants is often difficult and does not generalize, as: i) the object speed u_z is not available in unsupervised scene representation learning data, and ii) the two constants vary across different datasets. In practice, we cluster the data samples into SCFO and FCSO clusters using only the viewpoint speed ‖u_v‖, which is observable, for training (see Sec. 3.3). At test time, DyMON treats both kinds of samples equally.
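The viewpoint-speed clustering could be sketched as below: each sequence is summarized by its average per-frame camera displacement, and a tiny 1-D 2-means splits the dataset into a slow-camera (SCFO) and a fast-camera (FCSO) cluster. The function names and the 2-means choice are illustrative stand-ins, not the paper's exact implementation.

```python
import numpy as np

def mean_view_speed(viewpoints):
    """Average per-frame camera displacement ||v^{t+1} - v^t|| of a sequence."""
    v = np.asarray(viewpoints)
    return np.linalg.norm(np.diff(v, axis=0), axis=1).mean()

def assign_scfo_fcso(sequences, iters=20):
    """Cluster sequences into SCFO (slow camera) vs FCSO (fast camera)
    using only the observable viewpoint speed: a 1-D 2-means."""
    speeds = np.array([mean_view_speed(s) for s in sequences])
    centers = np.array([speeds.min(), speeds.max()])     # init the 2 centers
    for _ in range(iters):
        labels = np.abs(speeds[:, None] - centers[None, :]).argmin(1)
        for k in (0, 1):
            if np.any(labels == k):
                centers[k] = speeds[labels == k].mean()
    return labels        # 0 -> SCFO (slow camera), 1 -> FCSO (fast camera)
```

Pre-assigning labels this way, once per dataset, matches the batching scheme described in Sec. 3.3: no per-sample decision is needed during training.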

3.2 Spatial Object Factorization

DyMON tackles scene spatial decomposition in a similar way to MulMON Nanbo et al. (2020), using a generative model and an inference model. The generative likelihood of a single image observation x is modelled with a spatial Gaussian mixture Williams and Titsias (2004); Greff et al. (2017):

p(x | z_{1:K}, v) = ∏_{i=1}^{N} ∑_{k=1}^{K} π_{ik} N(x_i; μ_{ik}, σ²),    (2)

where i indexes a pixel location (N in total), and the RGB values x_i that pertain to an object k are sampled from a Gaussian distribution whose mean μ_{ik} is determined by the decoder network g (defined in Sec. 2) with trainable parameters θ, and whose standard deviation σ is globally set to a fixed value for all pixels. The mixing coefficients π_{ik} capture the categorical probability of assigning a pixel i to an object k (i.e. ∑_k π_{ik} = 1). This imposes a competition over the objects, as every pixel has to be explained by one and only one object in the scene.
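A direct transcription of this spatial-Gaussian-mixture likelihood, written in log space for numerical stability, might look as follows; the array shapes and the shared scalar σ are assumptions for illustration.

```python
import numpy as np

def sgmm_log_likelihood(mu, logits, x, sigma=0.1):
    """Log p(x | z_{1:K}, v) under the spatial Gaussian mixture of Eq. (2).

    mu:     (K, N, 3) per-object predicted pixel means
    logits: (K, N)    per-object mask logits; softmax over K gives pi_ik
    x:      (N, 3)    observed image, flattened to N pixels
    """
    # Mixing coefficients: pixel-wise log-softmax over the K object slots.
    m = logits.max(0)
    log_pi = logits - m - np.log(np.exp(logits - m).sum(0))
    # Per-slot Gaussian log density of each RGB pixel (shared sigma).
    log_n = (-0.5 * ((x[None] - mu) ** 2).sum(-1) / sigma**2
             - 1.5 * np.log(2 * np.pi * sigma**2))            # (K, N)
    # Log-sum-exp over slots, then sum the independent pixel terms.
    a = (log_pi + log_n).max(0)
    return (a + np.log(np.exp(log_pi + log_n - a).sum(0))).sum()
```

With a single slot (K = 1) the mixture collapses to a plain Gaussian likelihood, which makes the function easy to sanity-check.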

DyMON adapts the cross-view inference module of MulMON Nanbo et al. (2020) to handle: i) the cross-view object correspondence problem, ii) recursive approximation of a factorized posterior, and iii) temporal evolution of spatial structures (the last being the major difference between the inference modules of DyMON and MulMON). The decomposition and recursive approximation of the posterior is:

q(z^t_{1:K} | x^{1:t}, v^{1:t}) ∝ p(x^t | z^t_{1:K}, v^t) q(z^{t-1}_{1:K} | x^{1:t-1}, v^{1:t-1}),    (3)

where q(z^t_{1:K} | x^{1:t}, v^{1:t}) denotes the approximate posterior of the subproblem w.r.t. the observation x^t taken from viewpoint v^t at time t, and the scene prior q(z^0_{1:K}) is assumed to be a standard Gaussian. The intuition is to treat a posterior inferred from previous observations as the new prior when performing Bayesian inference of a new posterior based on a new observation. We use λ^t to denote the parameters of the inferred scene representations after observing x^t, i.e. the new posterior, and λ^{t-} to denote the new prior before observing x^t. Note that we can advance t either regularly or irregularly. The single-view (or within-view) inference is handled by DyMON using iterative amortized inference Marino et al. (2018) with an amortization function modelled with neural networks. Refer to Appendix B for full details about the generative and inference models of DyMON.
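The posterior-becomes-prior recursion can be illustrated with a conjugate Gaussian toy model standing in for the neural amortized refinement (scalar latent, known observation noise; purely illustrative, not DyMON's actual update):

```python
import numpy as np

def recursive_update(prior_mu, prior_var, obs, obs_var):
    """One recursion step in the spirit of Eq. (3): fuse the running prior
    with a new observation; the resulting posterior becomes the prior for
    the next observation. Conjugate Gaussian algebra stands in for the
    neural refinement network."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mu = post_var * (prior_mu / prior_var + obs / obs_var)
    return post_mu, post_var

mu, var = 0.0, 1.0            # standard-Gaussian scene prior
for x_t in [0.9, 1.1, 1.0]:   # a short sequence of (scalar) observations
    mu, var = recursive_update(mu, var, x_t, obs_var=0.5)
```

As views accumulate, the posterior variance shrinks, which mirrors the intuition that each new observation clarifies previous spatial uncertainty.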

3.3 Training

To enable DyMON to learn independently the generative relationships between scenes and observations, and between viewpoints and observations, during training, we build upon MulMON's architecture and break a long moving-camera-dynamic-scene sequence into short sub-sequences (see Algo. 1) within which independent sampling is possible. Similar to MulMON Nanbo et al. (2020), we then train DyMON by maximizing an objective function that linearly combines an evidence lower bound (abbr. ELBO) and the log-likelihood (abbr. LL) of the querying views:

L = ∑_{t ∈ T} ELBO(x^t, v^t) + β ∑_{t ∈ Q} log p(x^t | λ^{t-}, v^t),    (4)

where T and Q record the times at which DyMON performs inference and interventions (i.e. viewpoint-queried generation) respectively, and β is a weighting coefficient. We construct the querying viewpoints by sampling (either regularly or irregularly) from a random walk through the viewpoint space, where a uniform distribution with a specified expected value is used as the step distribution. As shown in Algo. 1, by varying the updating periods of z and v (denoted as T_z and T_v respectively), DyMON imitates the behaviours of a single-view-dynamic-scene model and a multi-view-static-scene model to handle the SCFO and FCSO samples respectively. In addition, using different β for the SCFO and FCSO samples allows alternating the training focus between spatial reasoning (w.r.t. objects and viewpoints) and temporal updating.
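A sketch of the Eq. (4) objective with diagonal-Gaussian ELBO terms (additive constants dropped; the names, shapes, and scalar σ are illustrative, not DyMON's actual implementation):

```python
import numpy as np

def gaussian_elbo(x, x_hat, mu, logvar, sigma=0.1):
    """ELBO term for one inference step: Gaussian reconstruction
    log-likelihood (constants dropped) minus KL(q(z) || N(0, I)) for a
    diagonal-Gaussian posterior with mean mu and log-variance logvar."""
    rec = -0.5 * np.sum((x - x_hat) ** 2) / sigma**2
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
    return rec - kl

def dymon_objective(elbo_terms, query_ll_terms, beta):
    """Eq. (4): sum of ELBOs over the inference times T plus a
    beta-weighted sum of log-likelihoods of viewpoint-queried
    predictions over the querying times Q."""
    return sum(elbo_terms) + beta * sum(query_ll_terms)
```

Alternating the β used for SCFO and FCSO batches then simply changes how strongly the querying term pulls against the per-step ELBOs.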

Input: training data D;
Hyperparameters: T_z, T_v, β;
  // update periods and querying-LL weight
Initialize trainable parameters θ, φ, and latent prior λ^0;
repeat
       Sample a sequence (x^{1:T}, v^{1:T}) from D;
        // (RGB images, viewpoints)
       if SCFO(x^{1:T}, v^{1:T}) then
              set T_z < T_v;
               // slow camera, fast objects: update z more often
       else
              set T_v < T_z;
               // fast camera, slow objects: update v more often
       t ← 1;  λ ← λ^0;  L ← 0;
       while t ≤ T do
              if t mod T_v = 0 then
                     v ← v^t;
                      // update the working viewpoint
              if t mod T_z = 0 then
                     λ ← Infer(λ, x^t, v);
                      // update the posterior λ
                     z_{1:K} ∼ q_λ;
                      // sample updated scene latents
                     L ← L + ELBO(x^t, v; λ);
                     for each querying viewpoint v_q do
                            x̂ ← g(z_{1:K}, v_q);
                            L ← L + β · log p(x^t_{v_q} | z_{1:K}, v_q);
                             // fix z, query v
              t ← t + 1;
       Compute gradients of L w.r.t. θ, φ;
        // maximize Eq. (4)
       Update θ, φ;
until converged;
Algorithm 1 DyMON Training Algorithm

Assignment Function and Batching As the samplers of T and Q behave differently for SCFO and FCSO data (see Algo. 1), we need to determine whether a given sample is an SCFO sample or an FCSO sample. Under A2, we consider any dataset consisting only of a mix of SCFO and FCSO samples (where a sample is a sequence of images). For a given dataset, we cluster all training samples into two clusters w.r.t. the SCFO and FCSO scenarios. This gives us an assignment function (as shown in Algo. 1). In practice, to avoid breaking parallel training by loading SCFO and FCSO samples into the same batch, we assign the training data beforehand instead of assigning every data sample on the fly during training. This allows batching FCSO or SCFO samples independently at every training step.

Figure 2: Qualitative results of spatial-temporal factorization. The GT rows show the true scene. The "MM" and "DM" rows are the scenes re-rendered by the corresponding models, i.e. MulMON and DyMON respectively. The vertical row pairs show the results of viewpoint changes and the horizontal direction shows the results at different times. Note that we train MulMON and DyMON on different datasets as MulMON cannot train on multi-view-dynamic-scene data. We also visualize MulMON's tendency to generate degenerate results along the temporal direction (marked with red arrows).

4 Related Work

Single-View-Static-Scene The breakthrough of unsupervised object discovery in the primary scenario, i.e. the single-view-image setting, laid a solid foundation for the recent rise of unsupervised object-centric representation learning research. Built upon a VAE Kingma and Welling (2014), early success was shown by AIR Eslami et al. (2016), which searches for one object at a time over image regions. Because AIR and most of its successors (e.g. Kosiorek et al. (2018)) treat objects as flat pixel patches and the image generation process as "pasting flat objects on a canvas" using a spatial transformer Jaderberg et al. (2015), they often cannot summarize well the scene spatial properties needed for 3D manipulation: for example, they do not render smaller objects when the objects are "moved" further away from the camera. To overcome this, most recent advances Burgess et al. (2019); Greff et al. (2019); Lin et al. (2019); Engelcke et al. (2019); Locatello et al. (2020); Engelcke et al. (2021) model a single 2D image with a spatial Gaussian mixture model Williams and Titsias (2004); Greff et al. (2017) that allows explicit handling of background and occlusions. Although these models suffer from single-view ambiguities like occlusions or optical illusions, they have the potential for attaining factorized representations of 3D scenes. Our work is closely related to IODINE Greff et al. (2019): we handle the object-wise inference from an image observation at each time point using the iterative amortized inference Marino et al. (2018) design and capture the compositional generative process with a spatial Gaussian mixture model.

Multi-View-Static-Scene A natural way of resolving single-view ambiguities is to aggregate information from multi-view observations. Although multi-view scene exploration does not directly facilitate object-level 3D scene factorization, Eslami et al. (2018) demonstrated that it does reduce spatial uncertainty and enables explicit evaluation of 3D knowledge via novel-view prediction. Combining GQN Eslami et al. (2018) and IODINE Greff et al. (2019), Nanbo et al. (2020) showed that MulMON effectively leverages multi-view exploration to extract accurate object representations of 3D scenes. However, like GQN, MulMON can only train on static-scene samples and thus does not generalize well to dynamic scenes. ROOTS Chen et al. (2021) combines the merits of GQN and AIR to perform multi-view-static-scene object-centric representation learning, but it requires the camera's intrinsic parameters to overcome AIR's deficiency in 3D scene learning; it is thus camera-dependent and hence less general. In our work, we propose DyMON as an extension of MulMON to dynamic scenes and a unified model for unsupervised multi-view object-centric representation learning.

Single-View-Dynamic-Scene A line of unsupervised object-centric scene representation learning research was established in the single-view-dynamic-scene setting Hsieh et al. (2018); Kosiorek et al. (2018); Jaques et al. (2020), where object dynamics are explicitly modelled and represented based on video observations. However, as most of these works employ an image composition design similar to AIR's, they deal only with flat 2D objects similar to MNIST digits and thus cannot model 3D spatial properties. A closely related work is GSWM Lin et al. (2020), which models relative depth information and pair-wise interactions of 3D object patches. In our work, the spatial-temporal factorization allows us to show the dynamics and depths of the objects from different viewpoints at different times.

Other Related Work As a multi-view-dynamic-scene representation learning framework, T-GQN Singh et al. (2019) is the work most closely related to ours. It models the spatial representation learning at each time step as a stochastic process (SP) and transitions between these time-stamped SPs with a state machine. However, there are notable distinctions between the problems that T-GQN and DyMON target: 1) T-GQN does not attain object-level scene factorization, and 2) a typical T-GQN requires multi-view observations at each time step (so-called "context") to perform spatial learning and thereby avoid the temporal entanglement problem (which has been the core focus of our work). Our work essentially deals with disentangled representation learning problems, which are often formulated under the frameworks of causal inference Pearl and others (2009); Suter et al. (2019); Schölkopf et al. (2021) and independent component analysis (abbr. ICA) Hyvärinen and Pajunen (1999); Hyvarinen and Morioka (2016). Unlike traditional disentangled representation learning works (e.g. Higgins et al. (2017); Kim and Mnih (2018); Locatello et al. (2019)) that aim at feature-level disentanglement, in this work we handle not only the object-level disentanglement that resides in object-centric representation learning research, but also the time-dependent scene-observer disentanglement problem. The recent trend of neural radiance fields (e.g. Mildenhall et al. (2020); Martin-Brualla et al. (2021); Pumarola et al. (2020)) is relevant to our work in the sense of representing 3D scenes using multi-view images. However, from a vision-as-inverse-graphics Yuille and Kersten (2006) perspective, we do not consider them scene understanding models, as they only aim to memorize the volumetric structure of a single scene during "training" and thus cannot perform representation inference for unseen scenes.

5 Experiments

We used two synthetic multi-view-dynamic-scene datasets, namely DRoom and MJC-Arm, and a real-world dataset, namely CubeLand (see Appendix C.3 for details). We conducted quantitative analysis on DRoom and show qualitative results on the other two datasets. The DRoom dataset consists of five subsets (including both training and testing sets): one subset (denoted as DR0-) with zero object motion (multi-view-static-scene data), one subset (denoted as DR0-) with zero camera motion (single-view-dynamic-scene data), and three multi-view-dynamic-scene subsets of increasing speed-difference levels from 1 to 3 (denoted as DR-Lvl.). Each of the five subsets consists of around training sequences ( frames of RGB images per sequence) and testing sequences ( frames from different views, i.e. images). Although DyMON's focus is on a more general problem, we nevertheless compare it against two recent and specialized unsupervised object-centric representation learning methods, i.e. GSWM Lin et al. (2020) and MulMON Nanbo et al. (2020), in the two respective settings they specialize in: single-view-dynamic-scenes and multi-view-static-scenes. All models were trained with different random seeds for quantitative comparisons. Refer to our supplementary material for full details on experimental setups, ablation studies, and more qualitative results.

5.1 Space-Time Querying

The recovery of the independent generative mechanism permits DyMON to make both viewpoint-queried and time-queried predictions, i.e. querying across space and time, of scene appearances and segmentations using the inferred scene representations, which enables the two demonstrations below:

Novel-view Prediction at Arbitrary Times Recall that a scene observation is the generative product of a specific scene (composed of objects) and observer at a specific time under a well-defined generative mapping, i.e. x^t = g(z^t_{1:K}, v^t) (see Sec. 2). Like previous multi-view object-centric representation learning models (e.g. MulMON Nanbo et al. (2020)), we query a scene of interest from an arbitrary viewpoint by fixing z^t_{1:K} and manually setting the viewpoint to arbitrary configurations. Similarly, we can query the spatial state of a dynamic scene at time t from a specific viewpoint by fixing the viewpoint and inputting the inferred z^t_{1:K} for arbitrary times t to the generative function. We trained DyMON on the DR-Lvl.3 data and show qualitatively the prediction results that are queried by space-time tuples in Figure 2.
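Schematically, space-time querying amounts to indexing the generative mapping by an arbitrary (time, viewpoint) pair: fix the latents for a queried time and sweep viewpoints, or fix a viewpoint and sweep times. A toy sketch, where a trivial stand-in decoder replaces the trained generative network (all names here are illustrative):

```python
import numpy as np

def query_space_time(decode, z_by_time, viewpoints, times):
    """Render a grid of predictions indexed by (time, viewpoint): pick
    the scene latents for each queried time and render them from each
    queried viewpoint. `decode(z, v)` stands in for the trained
    generative mapping."""
    return [[decode(z_by_time[t], v) for v in viewpoints] for t in times]

# Toy stand-in decoder: the "image" is just the latents shifted by v.
decode = lambda z, v: z + v
z_by_time = {0: np.zeros(3), 1: np.ones(3)}   # inferred latents per time
grid = query_space_time(decode, z_by_time,
                        viewpoints=[np.zeros(3), np.full(3, 2.0)],
                        times=[0, 1])
```

Rows of the grid vary time at fixed viewpoints; columns vary the viewpoint at a fixed time, which is exactly the layout of Figure 2.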

Dynamics Replay of Scenes & Objects From Arbitrary Viewpoints In this experiment, we give DyMON a sequence of image observations of a dynamic scene as input, and have it replay the dynamics from a novel viewpoint using the scene representations it infers from the observations. This is done by fixing the viewpoint to the desired configuration and querying consecutive times. As the inferred scene representations are factorized in terms of objects, we show in Figure 3 (left) that, besides the complete scene dynamics, DyMON also allows replaying the dynamics of a single object independently of the others. We present the qualitative results on the MJC-Arm dataset in Figure 3 (right), where one can see that DyMON not only replays object dynamics as global position changes but also captures local object motions.

Figure 3: Left: DyMON performing dynamics replays on the DRoom dataset, where the first row is the observation sequence input to DyMON, and the second and third rows show replays of the scene dynamics (all objects' original motions) and single-object dynamics (just the foreground green ball moves), respectively, from an arbitrary viewpoint. Right: DyMON replays local motions of a robot arm from an arbitrary viewpoint (top: observation, middle: reconstruction, bottom: replay from a higher viewpoint).

Dynamics On Real-World Data To demonstrate that our model has the potential for real-world applications, we conduct experiments and show qualitative results on real images (i.e. CubeLand data). We refer the readers to Appendix D.4 for the results.

5.2 Versatile Evaluation

DyMON is designed to handle object-centric representation learning in a general setting: multi-view-dynamic-scenes. In this section, we run experiments to evaluate how well DyMON handles the specialized settings.

(a) DyMON vs. Multi-View-Dynamic-Scenes
Models | Obs.Rec. (MSE ↓) | Nv.Obs. (MSE ↓) | Obs.Seg. (mIoU ↑) | Nv.Seg. (mIoU ↑)
MulMON | | | |
DyMON | | | |
(b) DyMON vs. Multi-View-Static-Scenes
Models | Obs.Rec. (MSE ↓) | Nv.Obs. (MSE ↓) | Obs.Seg. (mIoU ↑) | Nv.Seg. (mIoU ↑)
MulMON | | | |
DyMON | | | |
(c) DyMON vs. Single-View-Dynamic-Scenes
Models | Obs.Rec. (MSE ↓) | Obs.Seg. (mIoU ↑)
GSWM | |
DyMON | |
Table 1: Quantitative comparisons of DyMON with two baseline models, i.e. GSWM and MulMON, in handling the scenarios that the baseline models specialize in. The models in table (a) are trained and tested on the DR0- data, and those in (b) and (c) are trained and tested on the DR0- data. "Obs." tags reconstructions and segmentations computed for the observations and "Nv." tags those from novel viewpoints. Mean ± stddev over 3 training seeds. ↑ indicates higher is better and ↓ the opposite.

DyMON vs. Dynamic Scenes We first evaluate DyMON's performance in the multi-view-dynamic-scene setting in comparison to MulMON. MulMON also learns the independent generative mechanism of scene objects and observer, but under a strict static-scene constraint. Note that both DyMON and MulMON permit novel-view predictions of scene appearances and segmentations; this allows explicit quantification of the correctness and accuracy of the inferred scene representations. We use a mean-squared-error (MSE) measure and a mean-intersection-over-union (mIoU) measure. We train DyMON on the DR-Lvl.3 subset and MulMON on the DR0- subset (because it is unable to train on dynamic-scene data) and compare them across the three DRoom dynamic-scene subsets (i.e. DR-Lvl.). Table 1(a) shows that, although we train MulMON on a stricter dataset, i.e. the DR0- dataset, DyMON still outperforms MulMON on almost all indicators. We show the qualitative comparison in Figure 2 and observe that MulMON's performance declines along the temporal axis when large object motions appear. As neither DyMON nor MulMON imposes any order on object discovery, we used the Hungarian matching algorithm to find the bipartite match between the output masks and the ground-truth masks that maximizes the mIoU score.
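The order-invariant mIoU evaluation can be sketched as below. A brute-force permutation search stands in here for the Hungarian algorithm (it is equivalent for small slot counts K; scipy.optimize.linear_sum_assignment would be the scalable choice):

```python
import numpy as np
from itertools import permutations

def iou(p, g):
    """Intersection-over-union of two boolean masks."""
    union = np.logical_or(p, g).sum()
    return np.logical_and(p, g).sum() / union if union else 0.0

def matched_miou(pred_masks, gt_masks):
    """mIoU under the best bipartite match between predicted and
    ground-truth masks. Slot models impose no object order, so we score
    against the permutation that maximizes the mean IoU (a brute-force
    stand-in for the Hungarian algorithm, fine for small K)."""
    K = len(gt_masks)
    ious = np.array([[iou(p, g) for g in gt_masks] for p in pred_masks])
    return max(ious[list(perm), range(K)].mean()
               for perm in permutations(range(K)))
```

Without this matching step, a model that discovers the right objects in a different slot order would be penalized arbitrarily, so the metric would conflate segmentation quality with slot ordering.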

DyMON vs. Static Scenes We evaluate how well DyMON handles multi-view-static-scene scenarios in comparison with a specialized model, i.e. MulMON. We train and test both DyMON and MulMON on the DR0- subset w.r.t. reconstructions and segmentations of both the observed and unobserved views. Table 1(b) summarizes the results. They show that DyMON can handle this strictly constrained setting, even though it exhibits a slight performance gap compared with the specialized model. It is also worth noting that DyMON and MulMON produce high variances in segmentations. One possible reason is that both employ stochastic parallel inference mechanisms that can sometimes infer duplicate latent representations and harm segmentation Nanbo and Fisher (2021). This experiment, along with the DyMON-versus-dynamic-scenes experiment, provides useful guidance for model selection in multi-view applications: use a specialized model in a well-controlled environment and DyMON to handle complex scenarios.

Figure 4: Left: Qualitative comparisons of DyMON and GSWM on reconstructing the DR0- scenes. The GT rows show the actual observations of a dynamic scene, and the “DM” and “GSWM” rows show observation reconstruction results of DyMON and GSWM, respectively.

DyMON vs. Fixed-View Observations of Dynamic Scenes We assess DyMON’s performance on single-view-dynamic-scene observations by comparing it with GSWM Lin et al. (2020), a specialized object-centric representation model for this specific setting, although one that cannot produce pixel-level segmentations. We train both DyMON and GSWM on the DR0- subset and measure the reconstruction quality of the observations. Table 1(c) shows that DyMON not only outperforms GSWM in observation reconstruction, but also permits pixel-wise segmentation, which the specialized model cannot. The qualitative results in Figure 4 show that GSWM learns better object appearances (especially textures) than DyMON, whereas DyMON learns more accurate scene dynamics than GSWM. This is understandable, as GSWM models object dynamics explicitly, which introduces a risk of overfitting the observed motions. DyMON supports temporal interpolation well, i.e. dynamics replay (as shown in Figures 3 and 4), but it does not model object dynamics or interactions explicitly as GSWM does. As a result, it does not provide readily extrapolatable features along the time (or dynamics) axis for predicting into the future.

DyMON vs. T-GQN T-GQN Singh et al. (2019) is closely related work, as it targets unsupervised scene representation learning in multi-view-dynamic-scene settings, even though it does not attain object-centric factorization in the latent space. Although T-GQN requires multi-view observations at each time step (as “context” information) to sidestep the temporal entanglement issue, we nevertheless train it on our DRoom data and show that it fails to represent the DRoom scenes (see Appendix D.3 for the results and discussion).

6 Conclusion

We have presented Dynamics-aware Multi-Object Network (DyMON), a method for learning object-centric representations in a multi-view-dynamic-scene setting. We have made two weak assumptions that allow DyMON to recover the independent generative mechanisms of the observer and scene objects from both training and testing multi-view-dynamic-scene data, thereby achieving spatial-temporal factorization. This permits querying predictions of scene appearances and segmentations across both space and time. As this work focuses on representing the spatial scene configuration at each specific time point, DyMON does not model dynamics explicitly and thus cannot predict the future evolution of scenes; this leaves room for future exploration.

The first author would like to acknowledge the School of Informatics, the University of Edinburgh for providing his PhD scholarship. This research is partly supported by the Trimbot2020 project, which is funded by the European Union Horizon 2020 programme. The authors would like to thank Prof. C. K. I. Williams and Cian Eastwood for valuable discussions.

References

  • [1] C. P. Burgess, L. Matthey, N. Watters, R. Kabra, I. Higgins, M. Botvinick, and A. Lerchner (2019) Monet: unsupervised scene decomposition and representation. arXiv preprint arXiv:1901.11390. Cited by: §1, §2, §4.
  • [2] C. Diuk, A. Cohen, and M. L. Littman (2008) An object-oriented representation for efficient reinforcement learning. In International Conference on Machine Learning, pp. 240–247. Cited by: §1.
  • [3] C. Chen, F. Deng, and S. Ahn (2021) Object-centric representation and rendering of 3d scenes. JMLR. Cited by: §4.
  • [4] (2019) CLEVR Blender Environment, BSD license. Note: https://github.com/facebookresearch/clevr-dataset-gen (Accessed: 2021-06-02) Cited by: C.1 DRoom (DynamicRoom).
  • [5] (2019) DeepMind MultiObject Dataset, Apache-2.0 license. Note: https://github.com/deepmind/multi_object_datasets (Accessed: 2021-06-02) Cited by: C.1 DRoom (DynamicRoom).
  • [6] M. Engelcke, O. P. Jones, and I. Posner (2021) GENESIS-v2: inferring unordered object representations without iterative refinement. arXiv preprint arXiv:2104.09958. Cited by: §4.
  • [7] M. Engelcke, A. R. Kosiorek, O. P. Jones, and I. Posner (2019) GENESIS: generative scene inference and sampling with object-centric latent representations. In International Conference on Learning Representations, Cited by: §4.
  • [8] S. A. Eslami, N. Heess, T. Weber, Y. Tassa, D. Szepesvari, and G. E. Hinton (2016) Attend, infer, repeat: fast scene understanding with generative models. In Advances in Neural Information Processing Systems, pp. 3225–3233. Cited by: §4.
  • [9] S. A. Eslami, D. J. Rezende, F. Besse, F. Viola, A. S. Morcos, M. Garnelo, A. Ruderman, A. A. Rusu, I. Danihelka, K. Gregor, et al. (2018) Neural scene representation and rendering. Science 360 (6394), pp. 1204–1210. Cited by: §2, §4.
  • [10] K. Greff, R. L. Kaufman, R. Kabra, N. Watters, C. Burgess, D. Zoran, L. Matthey, M. Botvinick, and A. Lerchner (2019) Multi-object representation learning with iterative variational inference. In Proceedings of the 36th International Conference on Machine Learning, pp. 2424–2433. Cited by: §1, §2, §4, §4, B.2 Model implementation.
  • [11] K. Greff, S. Van Steenkiste, and J. Schmidhuber (2017) Neural expectation maximization. In Advances in Neural Information Processing Systems, pp. 6691–6701. Cited by: §3.2, §4.
  • [12] K. Greff, S. van Steenkiste, and J. Schmidhuber (2020) On the binding problem in artificial neural networks. arXiv preprint arXiv:2012.05208. Cited by: §1.
  • [13] I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, and A. Lerchner (2017) Beta-vae: learning basic visual concepts with a constrained variational framework.. In International Conference on Learning Representations, Cited by: §4.
  • [14] J. Hsieh, B. Liu, D. Huang, L. F. Fei-Fei, and J. C. Niebles (2018) Learning to decompose and disentangle representations for video prediction. In Advances in Neural Information Processing Systems, pp. 517–526. Cited by: §4.
  • [15] A. Hyvarinen and H. Morioka (2016) Unsupervised feature extraction by time-contrastive learning and nonlinear ICA. In Advances in Neural Information Processing Systems, Cited by: §4.
  • [16] A. Hyvärinen and P. Pajunen (1999) Nonlinear independent component analysis: existence and uniqueness results. Neural networks 12 (3), pp. 429–439. Cited by: §4.
  • [17] M. Jaderberg, K. Simonyan, A. Zisserman, and K. Kavukcuoglu (2015) Spatial transformer networks. In Advances in Neural Information Processing Systems, pp. 2017–2025. Cited by: §4.
  • [18] M. Janner, S. Levine, W. T. Freeman, J. B. Tenenbaum, C. Finn, and J. Wu (2019) Reasoning about physical interactions with object-oriented prediction and planning. In International Conference on Learning Representations, Cited by: §1.
  • [19] M. Jaques, M. Burke, and T. Hospedales (2020) Physics-as-inverse-graphics: unsupervised physical parameter estimation from video. In International Conference on Learning Representations, Cited by: §4.
  • [20] J. Johnson, B. Hariharan, L. van der Maaten, L. Fei-Fei, C. L. Zitnick, and R. Girshick (2017) CLEVR: a diagnostic dataset for compositional language and elementary visual reasoning. In CVPR, Cited by: C.1 DRoom (DynamicRoom).
  • [21] H. Kim and A. Mnih (2018) Disentangling by factorising. In International Conference on Machine Learning, pp. 2649–2658. Cited by: §4.
  • [22] D. P. Kingma and M. Welling (2014) Auto-encoding variational bayes. In International Conference on Learning Representations, Cited by: §4.
  • [23] A. Kosiorek, H. Kim, Y. W. Teh, and I. Posner (2018) Sequential attend, infer, repeat: generative modelling of moving objects. In Advances in Neural Information Processing Systems, pp. 8606–8616. Cited by: §4, §4.
  • [24] Z. Lin, Y. Wu, S. Peri, B. Fu, J. Jiang, and S. Ahn (2020) Improving generative imagination in object-centric world models. In International Conference on Machine Learning, Cited by: §1, §4, §5.2, §5.
  • [25] Z. Lin, Y. Wu, S. V. Peri, W. Sun, G. Singh, F. Deng, J. Jiang, and S. Ahn (2019) SPACE: unsupervised object-oriented scene representation via spatial attention and decomposition. In International Conference on Learning Representations, Cited by: §4.
  • [26] F. Locatello, S. Bauer, M. Lucic, G. Raetsch, S. Gelly, B. Schölkopf, and O. Bachem (2019) Challenging common assumptions in the unsupervised learning of disentangled representations. In International Conference on Machine Learning, pp. 4114–4124. Cited by: §4.
  • [27] F. Locatello, D. Weissenborn, T. Unterthiner, A. Mahendran, G. Heigold, J. Uszkoreit, A. Dosovitskiy, and T. Kipf (2020) Object-centric learning with slot attention. In Advances in Neural Information Processing Systems, Cited by: §1, §4.
  • [28] J. Marino, Y. Yue, and S. Mandt (2018) Iterative amortized inference. In International Conference on Machine Learning, pp. 3403–3412. Cited by: §3.2, §4.
  • [29] R. Martin-Brualla, N. Radwan, M. S. M. Sajjadi, J. T. Barron, A. Dosovitskiy, and D. Duckworth (2021) NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections. In CVPR, Cited by: §4.
  • [30] B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng (2020) NeRF: representing scenes as neural radiance fields for view synthesis. In European Conference on Computer Vision, pp. 405–421. Cited by: §4.
  • [31] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. (2015) Human-level control through deep reinforcement learning. Nature 518 (7540), pp. 529–533. Cited by: §1.
  • [32] L. Nanbo, C. Eastwood, and R. B. Fisher (2020) Learning object-centric representations of multi-object scenes from multiple views. In Advances in Neural Information Processing Systems, Cited by: §1, §1, §2, §3.2, §3.2, §3.3, §4, §5.1, §5.
  • [33] L. Nanbo and R. B. Fisher (2021) Duplicate latent representation suppression for multi-object variational autoencoders. In BMVC, Cited by: §5.2.
  • [34] J. Pearl et al. (2009) Causal inference in statistics: an overview. Statistics surveys 3, pp. 96–146. Cited by: §4.
  • [35] A. Pumarola, E. Corona, G. Pons-Moll, and F. Moreno-Noguer (2020) D-NeRF: Neural Radiance Fields for Dynamic Scenes. arXiv preprint arXiv:2011.13961. Cited by: §4.
  • [36] B. Schölkopf, F. Locatello, S. Bauer, N. R. Ke, N. Kalchbrenner, A. Goyal, and Y. Bengio (2021) Toward causal representation learning. Proceedings of the IEEE 109 (5), pp. 612–634. Cited by: §1, §4.
  • [37] G. Singh, J. Yoon, Y. Son, and S. Ahn (2019) Sequential neural processes. In Advances in Neural Information Processing Systems, Cited by: §4, §5.2.
  • [38] R. Suter, D. Miladinovic, B. Schölkopf, and S. Bauer (2019) Robustly disentangled causal mechanisms: validating deep representations for interventional robustness. In International Conference on Machine Learning, pp. 6056–6065. Cited by: §4.
  • [39] E. Todorov, T. Erez, and Y. Tassa (2012) MuJoCo: a physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026–5033. External Links: Document Cited by: C.2 MJC-Arm (Mujoco-Arm).
  • [40] N. Watters, L. Matthey, C. P. Burgess, and A. Lerchner (2019) Spatial broadcast decoder: a simple architecture for learning disentangled representations in vaes. arXiv preprint arXiv:1901.07017. Cited by: B.2 Model implementation.
  • [41] C. K. I. Williams and M. K. Titsias (2004) Greedy learning of multiple objects in images using robust statistics and factorial learning. Neural Computation 16 (5), pp. 1039–1062. Cited by: §3.2, §4.
  • [42] J. Yang, J. Mao, J. Wu, D. Parikh, D. D. Cox, J. B. Tenenbaum, and C. Gan (2020) Object-centric diagnosis of visual reasoning. arXiv preprint arXiv:2012.11587. Cited by: §1.
  • [43] A. Yuille and D. Kersten (2006) Vision as bayesian inference: analysis by synthesis?. Trends in cognitive sciences. Cited by: §2, §4.
  • [44] A. Zadaianchuk, M. Seitzer, and G. Martius (2021) Self-supervised visual reinforcement learning with object-centric representations. In International Conference on Learning Representations, Cited by: §1.

A. Algorithms

A.1 Iterative inference algorithm

Input: an observation, its viewpoint, the latent Gaussian parameters, the model parameters, and the number of single-view iterations
Initialize the latent Gaussian parameters
for each of the single-view iterations do
        sample a latent scene representation from the current posterior estimate;
         // sample from a prior---make a guess
        decode the sample and compare it against the observation;
         // render and verify
        compute the refinement inputs;
        update the latent Gaussian parameters;
         // refine and then repeat (until the last iteration)

Output: the refined latent Gaussian parameters
Algorithm 2 Iterative Inference Algorithm
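In code, the guess-render-verify-refine loop of Algorithm 2 has roughly the following shape. This is an illustrative sketch only: `decode` and `refine` stand in for the generator and refinement networks (Tables 3 and 4), and the update rule shown is a placeholder, not DyMON’s learned refinement.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode(z):
    # Stand-in for the generator network (Table 3).
    return np.tanh(z)

def refine(mu, log_sigma, error):
    # Stand-in for the learned refinement network (Table 4):
    # here, just a small gradient-like correction of the mean.
    return mu - 0.5 * error, log_sigma

def iterative_inference(x, n_iters=5):
    """Guess -> render -> verify -> refine, repeated n_iters times."""
    mu = np.zeros_like(x)          # initial latent Gaussian parameters
    log_sigma = np.zeros_like(x)
    for _ in range(n_iters):
        z = mu + np.exp(log_sigma) * rng.standard_normal(x.shape)  # guess
        error = decode(z) - x      # render and verify
        mu, log_sigma = refine(mu, log_sigma, error)  # refine, repeat
    return mu, log_sigma
```

In the real model the refinement network also consumes auxiliary inputs (gradients, masks, etc.; see Table 4), which this sketch omits.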

A.2 Testing algorithm

Input: the trained model parameters and the latent Gaussian parameters
Initialize the latent Gaussian parameters;
while a new observation (with its viewpoint) is accessible do
        refine the latent Gaussian parameters using the new observation;
        Output the current scene representation;

Algorithm 3 DyMON Testing Algorithm

B. Implementation Details

B.1 Training configurations

We show the training configurations used in this work in Table 2.

Type: the trainings of DyMON, MulMON, and GSWM
Optimizer: Adam
Initial learning rate
Learning rate at step
Total gradient steps: for DyMON vs. GSWM; for DyMON vs. MulMON
Batch size: 2
Number of GPUs per training: 1
* the same scheduler as the original GQN except for faster attenuation
Table 2: Training Configurations

B.2 Model implementation

We show the designs of the generative mapping function and the refinement function in Tables 3 and 4 respectively. After obtaining a set of RGBM outputs from this function (see Table 3), we render (i.e. compose) an image as a pixel-wise mixture of the per-slot RGB predictions, where the mixing weights are obtained by normalizing the per-slot mask logits with a softmax across the slots.
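A minimal numpy sketch of this composition step, assuming the per-slot outputs have already been split into RGB channels and mask logits:

```python
import numpy as np

def compose(rgb, mask_logits):
    """Compose K per-slot predictions into one image.

    rgb:         (K, H, W, 3) per-slot RGB predictions
    mask_logits: (K, H, W) per-slot mask logits
    Returns an (H, W, 3) image: a pixel-wise mixture whose weights are
    the mask logits normalized by a softmax across the K slots.
    """
    shifted = mask_logits - mask_logits.max(axis=0, keepdims=True)
    m = np.exp(shifted)
    m = m / m.sum(axis=0, keepdims=True)     # softmax over slots
    return (m[..., None] * rgb).sum(axis=0)  # pixel-wise weighted sum
```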

Table 3: Generator function
  (projection)
    Input
    Linear, 256 channels, ReLU
    Linear, linear activation
  (rendering)
    Input: Broadcast to grid, +2 coordinate channels *
    Conv, 32 channels, ReLU
    Conv, 32 channels, ReLU
    Conv, 32 channels, ReLU
    Conv, 32 channels, ReLU
    Conv, 4 channels, linear activation (RGBM: rgb + mask logits)
  Notes: the dimension of a latent representation is set to 16 for all experiments; the dimension of a viewpoint vector is fixed for all experiments; * see spatial broadcast decoder [40]; the same stride is used for all Convs.

Table 4: Refinement Network
  Input: 17 channels * (auxiliary inputs)
  Conv, 32 channels, ReLU
  Conv, 32 channels, ReLU
  Conv, 64 channels, ReLU
  Conv, 64 channels, ReLU
  Flatten
  Linear, 256 channels, ReLU
  Linear, 128 channels, linear activation
  Concat, 128+4 channels *
  LSTMCell/GRUCell, 128 channels
  Linear, 128 channels, linear activation (output)
  Notes: the dimension of a latent representation is set to 16 for all experiments; the same stride is used for all Convs; * see IODINE [10] for details; LSTMCell/GRUCell channels denote the dimensions of the hidden states.
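The “Broadcast +2” input step of the generator follows the spatial broadcast decoder [40]; a sketch of just that step, with an illustrative grid size:

```python
import numpy as np

def spatial_broadcast(z, height, width):
    """Tile a latent vector over an H x W grid and append two
    coordinate channels, as in the spatial broadcast decoder [40].

    z: (D,) latent vector -> (D + 2, H, W) feature map that the
    subsequent convolutions of the generator consume.
    """
    tiled = np.broadcast_to(z[:, None, None], (z.shape[0], height, width))
    ys, xs = np.meshgrid(np.linspace(-1, 1, height),
                         np.linspace(-1, 1, width), indexing="ij")
    return np.concatenate([tiled, ys[None], xs[None]], axis=0)
```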

C. Datasets

C.1 DRoom (DynamicRoom)

Simulation Environment We created the DRoom simulation on top of the CLEVR Blender environment [20, 4]. Like other multi-object datasets [5], we initialized every sequence by randomly selecting and placing 2-5 scene objects in a simulated room (with the background and walls specified). These objects are randomized in terms of shape (incl. deformations and sizes), color, and texture. Under the Blender physics engine settings, we enabled foreground objects’ movements by setting their dynamics status to “active” and disabled the background objects’ (i.e. the walls’ and ground’s) movements by setting their dynamics status to “passive”. We then created a centrifugal force field with a fixed center and range on the ground across all DRoom datasets. We sample the magnitude of the force from a categorical distribution over a fixed set of magnitudes, which allows us to simulate scene-object motions of different speeds by inputting different categorical selection probabilities. Moreover, we enabled object collisions to simulate scenes with rather complex object dynamics. The control of the observer (an RGB camera) motion is independent of the scene objects. We consider an observer or camera performing random walks on the surface of a dome (the top half of a sphere) whose center aligns with the center of the ground: we randomly initialize the starting position of the camera and randomly sample its next move. Note that as the camera can only move on the dome (with a fixed radius), we can use the azimuth and elevation of the camera to represent its location. We sample the azimuth and elevation increments independently from categorical distributions, so we can control the speed of the camera by inputting different selection probabilities.
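The dome parameterization can be sketched as follows. The radius, the increment set, and the selection probabilities below are illustrative placeholders for the generator specifications of Table 5, not the actual values:

```python
import numpy as np

rng = np.random.default_rng(0)

def camera_walk(n_steps, radius=5.0, increments=(-0.1, 0.0, 0.1), probs=None):
    """Random walk of (azimuth, elevation) on a dome of fixed radius.

    Each step draws azimuth/elevation increments from a categorical
    distribution; skewing `probs` towards larger increments yields a
    faster camera. Returns (n_steps, 3) Cartesian camera positions.
    """
    azim = rng.uniform(0.0, 2.0 * np.pi)              # random start
    elev = rng.uniform(0.1, np.pi / 2 - 0.1)
    positions = []
    for _ in range(n_steps):
        azim += rng.choice(increments, p=probs)
        elev = float(np.clip(elev + rng.choice(increments, p=probs),
                             0.0, np.pi / 2))
        positions.append([radius * np.cos(elev) * np.cos(azim),
                          radius * np.cos(elev) * np.sin(azim),
                          radius * np.sin(elev)])
    return np.asarray(positions)
```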

Figure 5: Left: DRoom simulation environment setup where yellow rings denote the force fields. Right: One fast-camera-slow-object (FCSO) sample (top row) and slow-camera-fast-object (SCFO) sample (bottom row). Both are randomly selected from the DR-Lvl. dataset.

Dataset We rendered all scenes at a fixed resolution for 40 frames (4-second motions), recording the 40 images with their corresponding viewpoints, which we represent using their 3-D Cartesian coordinates. The sampler specifications, i.e. the categorical distributions used to generate the five DRoom subsets, are listed in Table 5. As discussed in Sec. 3.3, we clustered all the data samples based on their average camera speeds across each sequence to assign them to the FCSO and SCFO partitions. We visualize the clustering results for DR-Lvl. in Figure 6.
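The camera-speed clustering can be sketched as a two-cluster 1-D k-means; `split_fcso_scfo` and its centroid initialization are our illustration, not necessarily the paper’s exact procedure:

```python
import numpy as np

def split_fcso_scfo(avg_camera_speeds, n_iters=20):
    """Partition sequences into FCSO (fast camera) and SCFO (slow
    camera) by two-cluster 1-D k-means on the per-sequence average
    camera speed. Returns a boolean array: True = FCSO."""
    s = np.asarray(avg_camera_speeds, dtype=float)
    slow_c, fast_c = s.min(), s.max()          # initial centroids
    for _ in range(n_iters):
        fast = np.abs(s - fast_c) < np.abs(s - slow_c)
        slow_c, fast_c = s[~fast].mean(), s[fast].mean()
    return fast
```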

Table 5: DRoom Generator Specs Force Magnitude Camera Random Walk Next Move Subsets (constant in its range) (for both and ) DR0- DR0- DR-Lvl. FCSO SCFO DR-Lvl. FCSO SCFO DR-Lvl. FCSO SCFO
Figure 6: Visualization of the data assignment results on the DR-Lvl. datasets.

C.2 MJC-Arm (Mujoco-Arm)

Simulation Environment The environment is built with the MuJoCo physics simulator [39], and a Franka Emika robot arm with a Barrett hand attached is the main scene object. The arm has 7 degrees of freedom, and the joints of the robotic hand are fixed during data generation. Eight different collision-free robot-arm motion trajectories are pre-defined, each with a unique initial and target joint configuration. Every joint is controlled in a position-derivative manner with a constant velocity, which is the product of a nominal velocity and a sampled weight. The nominal velocities of the 7 arm joints (from base to end-effector) are related to the link lengths of the robot arm. The joint velocity weights for the FCSO and SCFO data trials are sampled from different ranges. We also introduced a moving ball with a random fixed direction and a constant weighted velocity in the simulation. The control of the RGB camera is the same as introduced in the previous section, with the view fixed towards the base link of the robot arm.
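The joint-velocity control can be sketched as below; the nominal velocities and weight ranges are not reproduced in the text above, so the numbers here are purely illustrative placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative nominal joint velocities (rad/s), base to end-effector.
NOMINAL_VELOCITIES = np.array([0.6, 0.6, 0.5, 0.5, 0.4, 0.3, 0.3])

def sample_joint_velocities(fast_objects,
                            fast_range=(0.8, 1.2), slow_range=(0.1, 0.3)):
    """Constant per-joint velocity = nominal velocity * sampled weight.
    SCFO trials draw weights from the fast range, FCSO from the slow one."""
    lo, hi = fast_range if fast_objects else slow_range
    weights = rng.uniform(lo, hi, size=NOMINAL_VELOCITIES.shape)
    return NOMINAL_VELOCITIES * weights
```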

Figure 7: Left: Mujoco simulation environment. Right: One fast-camera-slow-object (FCSO) sample (top row) and slow-camera-fast-object (SCFO) sample. Both are randomly selected from the MJC-Arm dataset.

Dataset For each data sample, the scenes are rendered at a fixed resolution at 10 Hz for 4 seconds (40 frames per sample). At the beginning of every trial, the textures of the robot arm and the moving ball are randomly selected from a colour set. The robot arm is initialised with the starting pose of a randomly selected motion trajectory.

C.3 Real-world dataset (CubeLand)

Figure 8: CubeLand data-collection platform.

Data-collection Environment We created CubeLand in a controlled real-world environment. Four cubes of different colours (i.e., red, blue, green and yellow) were placed on a table. To avoid glare, reflections and unnecessary background clutter, the surface of the table was made white, giving a bicolour data-collection environment. A camera was mounted on the end effector of a Franka arm (a robotic arm with 7 D.O.F.) as shown in Figure 8. The end effector has a fixed motion, i.e., it only rotates back and forth through 120 degrees with no translational motion involved. Threads were taped to the bottoms of the cubes so they could be moved freely and randomly. The recordings had two configurations, i.e., slow camera, fast objects (SCFO) and fast camera, slow objects (FCSO) (see Figure 9). In the first configuration, the rotation speed of the end effector was 1.67 rpm (10 degrees per second) while the objects were manually pulled and thrown back into the scene at an arbitrary faster speed. In the latter configuration, the rotation speed of the end effector was set to 4.17 rpm (25 degrees per second), whereas the objects were pulled and pushed by hand back into the scene at a slower rate. The height of the camera is 14.5 cm and the radius this assembly spans (centre of the end effector to the camera) is 19.5 cm.

Figure 9: CubeLand data samples. Top: a fast camera, slow objects (FCSO) data sample. Bottom: a slow camera, fast objects (SCFO) data sample.

Dataset All the frames collected were initially 480x480. During post-processing, these frames were resized to 64x64 after applying a median filter of kernel size 9 (each pixel is replaced by the median of its neighbouring pixels). Overall, 100 sequences of 50 frames each were extracted. Furthermore, each of the viewpoints was converted into 3-D Cartesian coordinates. The classification into SCFO and FCSO is based solely on the rotations per minute of the end effector.
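The post-processing can be sketched as follows; the nearest-neighbour resize is a simple stand-in, since the text does not specify the exact resizing method:

```python
import numpy as np
from scipy.ndimage import median_filter

def postprocess_frame(frame, out_size=64, kernel=9):
    """Apply a median filter (each pixel replaced by the median of its
    kernel x kernel neighbourhood, per channel), then resize to
    out_size x out_size by nearest-neighbour sampling.

    frame: (H, W, 3) array -> (out_size, out_size, 3) array.
    """
    filtered = median_filter(frame, size=(kernel, kernel, 1))
    h, w = frame.shape[:2]
    ys = np.arange(out_size) * h // out_size
    xs = np.arange(out_size) * w // out_size
    return filtered[np.ix_(ys, xs)]
```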

D. Additional Results

D.1 Assumption Validation

As discussed in Sec. 3.1 of the main paper, the training of DyMON on multi-view-dynamic-scene data rests on two assumptions that favor high-frame-rate image sequences and a large difference between the speeds of the observer and the scene objects. In this experiment, as we know that the average speed differences of the DR-Lvl. subsets are in ascending order, we can assess the robustness of DyMON against violations of these assumptions. We trained DyMON on each of the DR-Lvl. training sets and then evaluated its performance on space-time-queried prediction of scene appearances on the DR-Lvl. test sets. We visualize the MSE as a function of the increased levels of speed differences in Figure 10.

Figure 10: The space-time-queried scene appearance prediction performance comparison between three DyMONs that are trained on three levels of scene-observer speed differences, i.e. DR-Lvl., respectively. Left: Averaged MSE achieved by the three models on three DRoom testing sets, i.e. the testing sets of DR-Lvl.. Right: The performance of the three models on each of the three testing sets.

As shown: 1) there are no significant performance drops across the different training and test sets; 2) the faster the observer and scene-object speeds, the better the models perform. This holds for both training (see the overall performance in the left figure) and testing (see each model’s testing performance on the different test sets in the right figure). These results support our claims about DyMON’s robustness in complex and potentially dynamic environments.

D.2 Ablation Study

Figure 11: Ablation study results. Top: Space-time queried view synthesis MSE vs. nested and . Bottom left: MSE vs. different (in space). Bottom middle: MSE vs. different (MSE computed by averaging across different ). Bottom right: MSE vs. different (MSE computed by averaging across different ).

We highlight two groups of hyperparameters that play significant roles in the training of DyMON: 1) the updating periods of the two sets of latents, and 2) the weighting coefficient of the viewpoint-queried generative log-likelihood. We varied these two groups of hyperparameters and visualized their influence on DyMON: similar to Sec. D.1, we measure DyMON’s novel-view synthesis performance at every time point and visualize it as a function of these hyperparameters. We varied the two updating periods over discrete sets of values, which allows us to show their joint effects in a grid (see the top half of Figure 11). To analyze the independent effect of each updating period, we “squeezed” the grid by computing the MSE averaged over each of its two axes in turn (see the bottom-middle and bottom-right plots of Figure 11 for the results). One can see that a short updating period for the scene-object latents is preferred, as this captures more detailed scene-object motions, while the selection of the other updating period is subtler. One might run a pre-analysis before training, e.g. visually inspecting several sequences, to select a better value. Similarly, we varied the weighting coefficient by setting it to 0.5, 1.0, and 2.0 respectively, and show the results in the bottom left of Figure 11.
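The “squeezing” of the grid is just a marginalization of the MSE array over one axis at a time; with a hypothetical 3x3 grid of MSE values (illustrative numbers, not from the paper):

```python
import numpy as np

# Hypothetical space-time-queried MSEs on a 3 x 3 grid of the two
# updating periods (rows: first period; columns: second period).
mse_grid = np.array([[0.021, 0.024, 0.030],
                     [0.019, 0.022, 0.028],
                     [0.020, 0.025, 0.033]])

# Independent effect of each period: average the other axis away.
mse_vs_first_period = mse_grid.mean(axis=1)
mse_vs_second_period = mse_grid.mean(axis=0)
```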

D.3 T-GQN Results

We used the official implementation of T-GQN (https://github.com/singhgautam/snp) and trained a T-GQN on the DR-Lvl.3 data. Although the training converged (see Figure 12), we observe that it fails to represent the underlying 3D scenes (see Figure 13), and training T-GQN with posterior dropout, i.e. T-GQN-PD, does not fix the issue. We speculate that this is because it lacks multiple views at each time step with which to resolve the temporal entanglement issue. However, future investigations are required to validate this speculation.

Figure 12: T-GQN training curves. We train T-GQN on our DRoom data until it converges.
Figure 13: Qualitative results of T-GQN on DR-Lvl.3 test data.

D.4 Additional Qualitative Results

Figure 14: Spatial-temporal factorization results of a DRoom scene.
Figure 15: Dynamics replay of a DRoom scene.
Figure 16: Qualitative comparisons of DyMON and MulMON on DRoom. Left: reconstruction performance. Right: spatial-temporal factorization performance. We train DyMON on DR-Lvl.3 and train MulMON on DR0-.
Figure 17: Qualitative comparisons of DyMON and GSWM on DR0-. Top: reconstruction performance. Bottom: segmentation performance (we observe that DyMON outperforms GSWM in segmenting scenes).
Figure 18: Dynamics replay of a MJC-Arm scene.
Figure 19: Dynamics replay of a real scene (i.e. CubeLand data). We conduct experiments on real-world data to show DyMON’s potential for real-world applications.