Physics-as-Inverse-Graphics: Joint Unsupervised Learning of Objects and Physics from Video

05/27/2019 · by Miguel Jaques et al.

We aim to perform unsupervised discovery of objects and their states such as location and velocity, as well as physical system parameters such as mass and gravity from video -- given only the differential equations governing the scene dynamics. Existing physical scene understanding methods require either object state supervision, or do not integrate with differentiable physics to learn interpretable system parameters and states. We address this problem through a physics-as-inverse-graphics approach that brings together vision-as-inverse-graphics and differentiable physics engines. This framework allows us to perform long term extrapolative video prediction, as well as vision-based model-predictive control. Our approach significantly outperforms related unsupervised methods in long-term future frame prediction of systems with interacting objects (such as ball-spring or 3-body gravitational systems). We further show the value of this tight vision-physics integration by demonstrating data-efficient learning of vision-actuated model-based control for a pendulum system. The controller's interpretability also provides unique capabilities in goal-driven control and physical reasoning for zero-data adaptation.


1 Introduction

Humans have a remarkable ability to estimate physical properties and predict future movements given brief visual observations of objects’ dynamics. This is facilitated by the fact that objects’ motions are often governed by simple laws of physics. These laws impose a regular structure on the objects’ trajectories that allows us to predict where they will be several seconds into the future. Furthermore, humans are capable of locating and identifying novel objects in a visual scene, and of estimating their properties such as mass and elasticity from visual observations of their dynamics and interactions. These capabilities are learned in an unsupervised manner, and enable us to solve physical reasoning tasks by foreseeing future states and acting accordingly: for example, moving to catch a flying ball according to its predicted trajectory, or preparing the requisite degree of stiffness to interact with an object based on its estimated mass.

Current machine learning approaches to such physical modeling tasks either require training by supervised regression from video to object coordinates in order to estimate explicit physics (Watters et al., 2017; Wu et al., 2017b; Belbute-Peres et al., 2018), or are able to discover and segment objects from video in an unsupervised manner but do not integrate naturally with a physics engine for long-term prediction, nor do they produce interpretable locations and physical parameters for physical reasoning (Xu et al., 2019; van Steenkiste et al., 2018). In this work, we bridge the gap between unsupervised discovery of objects from video and learning the physical dynamics of a system, including unknown physical parameters and explicit trajectory coordinates. Learning to identify objects and their explicit physical dynamics from video opens the door to long-term video prediction; to applications in vision-actuated model-based control, where we have access to video streams but not the underlying object states; and even to counterfactual physical reasoning.

Our approach, called physics-as-inverse-graphics, solves the physical modeling problem via a novel vision-as-inverse-graphics encoder-decoder system that renders and de-renders image components using Spatial Transformers (ST) (Jaderberg et al., 2015) in a way that allows the latent representation to produce interpretable parameters that can be used directly in a differentiable physics engine. Physics-as-inverse-graphics can also be viewed as enabling the incorporation of high-level physical interaction knowledge into the learning process as an inductive bias. This allows us to fit the physical parameters of a scene where the family of differential equations governing the system is known (e.g. objects connected by a spring) but the corresponding parameters are not (e.g. the spring constant), using only a video stream and without access to ground-truth appearance, positions or velocities of the objects. Importantly, the physical parameters and the vision/graphics components of the model are learned jointly. It should be emphasized that the design of an encoder/decoder with coordinate consistency for physics-engine integration is the key contribution of this work. The difficulty of this design explains why previous attempts to develop similar models have proven unsuccessful.

We apply our model to two challenging tasks: long-term video prediction and visual model-predictive control. First, we evaluate future frame prediction and physical parameter estimation accuracy on 4 datasets with different non-linear interactions (2 objects bouncing off the walls, 2 objects connected by a spring, and 3 objects with gravitational attraction) and different visual difficulty (colored balls on a black background, and MNIST (LeCun et al., 1998) digits on a CIFAR (Krizhevsky, 2009) background). We then demonstrate data-efficient learning of vision-based model-predictive control by learning the dynamics of an under-actuated inverted pendulum from video. Our framework also uniquely enables goal-parameterization and physical reasoning for zero-shot adaptation in vision-based control.

2 Related Work

The ability to build inductive bias into models through model structure is a key factor behind the success of modern neural architectures. Convolutional operations capture spatial correlations (Fukushima, 1980) in images, recurrency allows for temporal reasoning (Hochreiter and Schmidhuber, 1997), and spatial transformers (Jaderberg et al., 2015) provide spatial invariance in learning. However, many aspects of common data generation processes are not yet considered by these simple inductive biases. Notably they typically ignore the physical interactions underpinning data generation. For example, it is often the case that the underlying physics of a dynamic visual scene is known, even if specific parameters and objects are not. Incorporation of this information would be beneficial for learning, predicting the future of the visual scene, or control. Physics-as-inverse graphics allows such high-level physical interaction knowledge to be incorporated into learning, even when true object appearance, positions and velocities are not available.

In recent years there has been increased interest in physical scene understanding from video (Fragkiadaki et al., 2015; Finn et al., 2016; Fraccaro et al., 2017; Chang et al., 2017; Zheng et al., 2018; Janner et al., 2019). In order to learn explicit physical dynamics from video we take inspiration from the long literature on neural vision-as-inverse-graphics (Hinton et al., 2011; Kulkarni et al., 2015; Huang and Murphy, 2015; Ellis et al., 2017; Romaszko et al., 2017; Wu et al., 2017a), particularly in the use of spatial transformers (ST) for rendering (Eslami et al., 2016; Rezende et al., 2016; Zhu et al., 2018).

There are several models that assume knowledge of the family of equations governing system dynamics, but where the individual objects are either pre-segmented or their ground-truth positions/velocities are known (Stewart and Ermon, 2017; Wu et al., 2017b; Belbute-Peres et al., 2018). Unsupervised discovery of objects and dynamics from video has also seen increased interest (Xu et al., 2019; van Steenkiste et al., 2018), though such models do not typically use latent representations that can be consumed directly by a physics engine. For example, Kosiorek et al. (2018) and Hsieh et al. (2018) use STs to locate/place objects in a scene and predict their motion, but they differ from our model in that our coordinate-consistent design obtains explicit Cartesian or angular coordinates, allowing us to feed state vectors directly into a differentiable physics engine. Under a similar motivation to ours, but without an inverse-graphics approach, Ehrhardt et al. (2018) developed an unsupervised model that obtains consistent object locations, though it applies only to Cartesian coordinates, not angles or scale.

Within the differentiable physics literature (Degrave et al., 2016), Belbute-Peres et al. (2018) observed that a multi-layer perceptron (MLP) encoder-decoder coupled to a physics engine was not able to learn without supervising the physics engine’s output with position/velocity labels (cf. Fig. 4 in Belbute-Peres et al. (2018)). While in their case 2% labeled data is enough to allow learning, the transition to no labeled data is far harder than going from 100% to 2% labels. The difficulty of incorporating deterministic physics engines into learning models has prohibited the exploitation of this form of inductive bias. A key contribution of our work is a coordinate-consistent decoder, which makes this transition possible.

Despite recent interest in model-free reinforcement learning approaches, model-based control systems have repeatedly been shown to be more robust and sample efficient (Mania et al., 2018; Deisenroth and Rasmussen, 2011). Hafner et al. (2019) learn a latent dynamics model (PlaNet) that allows planning from pixels and is significantly more sample efficient than the model-free strategies A3C (Mnih et al., 2016) and D4PG (Barth-Maron et al., 2018). However, for control there is often a desire for visually grounded controllers operating under known dynamics, which are implicitly verifiable and interpretable (Burke et al., 2019), as these allow for transferability and generality. Unfortunately, system identification can be challenging in vision-based control settings. Byravan et al. (2018) use supervised learning to segment objects and control them using known rigid-body dynamics. Penkov and Ramamoorthy (2019) learn feedforward models with REINFORCE (Williams, 1992) to predict the physical states used by a known controller and dynamical model, but this is extremely sample inefficient.

Figure 1: High-level view of the architecture. The encoder estimates the position of objects in each input frame. These are passed to the velocity estimator which estimates objects’ velocities at the last input frame. The positions and velocities of the last input frame are passed as initial conditions to the physics engine. At every time-step, the physics engine outputs a set of positions, which are used by the decoder to produce a predicted frame. Optionally, if the system is actuated, an input action is passed to the physics engine at every time-step. The encoder, velocity estimator, physics engine, and decoder are jointly trained end-to-end with a sequence-to-sequence video frame prediction objective.

3 Unsupervised Learning of Physics via Inverse Graphics

We use a temporal autoencoder architecture consisting of 4 modules trained jointly: an encoder, a velocity estimator, a differentiable physics engine, and a decoder. The architecture is shown in Figure 1. At a high level, the encoder computes location coordinates for each of the objects in a single frame; the velocity estimator computes velocities for each object given the encoder outputs for the input frames; the physics engine rolls out the objects’ trajectories given these initial position and velocity estimates; and the decoder outputs an image given the location coordinates of each object.
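The following sketch summarizes this forward pass, assuming hypothetical `encoder`, `velocity_estimator`, `physics_engine` and `decoder` callables (the names and tensor shapes are illustrative, not the authors' implementation):

```python
import torch

def rollout(frames, encoder, velocity_estimator, physics_engine, decoder,
            n_pred, actions=None):
    """frames: (B, T_in, C, H, W) input video; returns (B, n_pred, C, H, W) predictions."""
    t_in = frames.shape[1]
    # Encode each input frame independently into per-object coordinates.
    coords = torch.stack([encoder(frames[:, t]) for t in range(t_in)], dim=1)  # (B, T_in, n_obj, D)
    # Estimate each object's velocity at the last input frame from its coordinate history.
    vel = velocity_estimator(coords)          # (B, n_obj, D)
    pos = coords[:, -1]                       # positions at the last input frame
    preds = []
    for t in range(n_pred):
        u = None if actions is None else actions[:, t]
        pos, vel = physics_engine(pos, vel, action=u)  # one integration step of the learned dynamics
        preds.append(decoder(pos))            # render a frame from the predicted positions
    return torch.stack(preds, dim=1)
```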

Encoder The encoder net takes a single frame as input and outputs a vector containing the D-dimensional coordinates of each of the N objects in the scene. For example, when modelling position in 2D space we have D = 2 (Cartesian coordinates); when modelling object angle we have D = 1. The encoder architecture is shown in Figure 1 (top right).

To extract each object’s coordinates we use a 2-stage localization approach. First, the input frame is passed through a U-Net (Ronneberger et al., 2015) to produce N unnormalized masks. These masks (plus a fixed background mask) are stacked and passed through a softmax, so that each input pixel is softly assigned to a mask. The input image is then multiplied by each mask, and a location network (an MLP with 2 hidden layers of 200 ReLU units each) produces coordinate outputs from each masked input. For a 2D system where the coordinates of each object are its (x, y) position (the polar-coordinates case is analogous) and the images have dimensions H × W, the encoder output represents coordinates in pixel space. To achieve this, the activation of the encoder’s output layer is a saturating non-linearity.
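A minimal PyTorch sketch of this two-stage localization encoder is given below; the shared location MLP, the zero-valued background mask logits, and the use of tanh (mapped to the spatial transformer's normalized coordinate range) as the saturating output non-linearity are assumptions for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Illustrative 2-stage localization encoder (module sizes/choices are assumptions)."""
    def __init__(self, unet, n_objects, img_size, in_channels=3):
        super().__init__()
        self.unet = unet              # any net mapping images -> (B, n_objects, H, W) mask logits
        self.n_objects = n_objects
        h, w = img_size
        self.loc_net = nn.Sequential( # location MLP: 2 hidden layers of 200 ReLU units
            nn.Flatten(), nn.Linear(in_channels * h * w, 200), nn.ReLU(),
            nn.Linear(200, 200), nn.ReLU(), nn.Linear(200, 2))

    def forward(self, x):                         # x: (B, C, H, W)
        logits = self.unet(x)                     # (B, n_objects, H, W) unnormalized masks
        bg = torch.zeros_like(logits[:, :1])      # fixed background mask logits (assumed zero)
        masks = F.softmax(torch.cat([logits, bg], dim=1), dim=1)  # pixel-wise soft assignment
        coords = []
        for k in range(self.n_objects):
            masked = x * masks[:, k:k + 1]        # keep only pixels assigned to object k
            raw = self.loc_net(masked)            # (B, 2) unconstrained coordinates
            coords.append(torch.tanh(raw))        # saturating non-linearity -> normalized (-1, 1) range
        return torch.stack(coords, dim=1)         # (B, n_objects, 2)
```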

Velocity estimator The velocity estimator computes the velocity vector of each object at the last input frame, given the coordinates produced by the encoder for that object over the input frames. We implement it as an MLP with 3 hidden layers of 100 tanh-activated units each.
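A corresponding sketch of the velocity estimator, applied to one object's coordinate history (input/output shapes are assumptions):

```python
import torch.nn as nn

class VelocityEstimator(nn.Module):
    """Sketch: MLP mapping an object's coordinate history to its velocity at the last
    input frame. Layer sizes follow the text; the I/O layout is an assumption."""
    def __init__(self, n_input_frames, coord_dim):
        super().__init__()
        d = n_input_frames * coord_dim
        self.net = nn.Sequential(
            nn.Linear(d, 100), nn.Tanh(),
            nn.Linear(100, 100), nn.Tanh(),
            nn.Linear(100, 100), nn.Tanh(),
            nn.Linear(100, coord_dim))

    def forward(self, coords):               # coords: (B, n_input_frames, coord_dim)
        return self.net(coords.flatten(1))   # (B, coord_dim) velocity at the last input frame
```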

Differentiable physics engine The physics engine contains the differential equations governing the system, with unknown physical parameters to be learned, such as spring constants, gravity, or mass. Given initial positions and velocities produced by the encoder and velocity estimator, the physics engine rolls out the objects’ trajectories. Our experiments use the Euler method to numerically integrate the differential equations, but more complex engines (Chen et al., 2018; Belbute-Peres et al., 2018) could be used.
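As an example of what such an engine looks like, the sketch below implements one forward-Euler step for two unit-mass objects joined by a spring, with the spring constant and equilibrium distance as learnable parameters. The exact equations used in our experiments are given in the Supplementary Material, so this is illustrative only:

```python
import torch
import torch.nn as nn

class SpringPhysics(nn.Module):
    """Sketch of a differentiable physics step for two objects joined by a spring."""
    def __init__(self, dt=0.1):
        super().__init__()
        self.k = nn.Parameter(torch.tensor(1.0))   # spring constant (learned)
        self.l = nn.Parameter(torch.tensor(1.0))   # equilibrium distance (learned)
        self.dt = dt

    def forward(self, pos, vel, action=None):      # pos, vel: (B, 2 objects, 2)
        diff = pos[:, 1] - pos[:, 0]                # vector from object 0 to object 1
        dist = diff.norm(dim=-1, keepdim=True).clamp_min(1e-6)
        force = self.k * (dist - self.l) * diff / dist   # Hooke's law along the spring
        acc = torch.stack([force, -force], dim=1)   # equal and opposite forces, unit masses
        # action is unused here; an actuated system would add an input force/torque term
        vel = vel + self.dt * acc                   # forward Euler step
        pos = pos + self.dt * vel
        return pos, vel
```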

Coordinate-Consistent Decoder The decoder takes as input the positions given by the encoder or physics engine and outputs a reconstructed/predicted image. The decoder is the most critical part of this system, and is what allows the encoder, velocity estimator and physics engine to learn correctly in a fully unsupervised manner. We therefore describe its design in greater detail.

While an encoder with outputs constrained to the image coordinate range can represent coordinates in pixel space, this does not mean that the decoder will learn to correctly associate an input vector (x, y) with an object rendered at pixel (x, y). If the decoder is unconstrained, like a standard MLP, it can easily learn an erroneous, non-linear representation of this Cartesian space. For example, given two different inputs (x1, y) and (x2, y) that share the same vertical coordinate, the decoder may render the two objects at different vertical positions in the image. While a correct Cartesian coordinate representation is not strictly necessary to allow the physical parameters of the physics engine to be fit on a training set, it is critical to ensure correct future predictions: the relationship between the position vector and the pixel-space position must be fixed, so that if the position vector changes by some amount, the object’s position in the output image changes by the same amount. This is the key realisation that allows us to improve upon Belbute-Peres et al. (2018) and learn an encoder and decoder together with a physics engine without providing state labels.

In order to impose a correct latent-coordinate to pixel-coordinate correspondence, we use spatial transformers with modified parameters as the decoder’s writing attention mechanism. The transformer parameters are chosen such that a decoder input of (x, y) locates the center of the writing attention window at pixel (x, y) in the image, and a decoder input of θ rotates the attention window by θ. In the original spatial transformer formulation (Jaderberg et al., 2015), the transformation matrix represents the affine transformation applied to the output image coordinates to obtain the source image coordinates. The elements of that matrix (Eq. 1 of Jaderberg et al. (2015)) therefore do not directly represent the translation, scale or angle change between the input and output image. We must therefore find the form of the transformation matrix that allows position, angle or scale outputs of the physics engine or encoder to be used directly as inputs to the decoder’s spatial transformer.

For a general affine transformation with translation (t_x, t_y), angle θ and scale s, we want to map source image coordinates (x_s, y_s) to target coordinates (x_t, y_t) according to:

(1)   (x_t, y_t)^T = s R(θ) (x_s, y_s)^T + (t_x, t_y)^T,

where R(θ) is the 2D rotation matrix.

However, the spatial transformer performs the inverse transformation. Inverting (1) we get:

(2)   (x_s, y_s)^T = (1/s) R(−θ) ((x_t, y_t)^T − (t_x, t_y)^T)

Therefore, to use the spatial transformer with consistent coordinates, we construct the transformer matrix A as the 2×3 matrix corresponding to (2). (In the code, the translation parameters correspond to pixel offsets expressed as a fraction of the image size, and the difference in resolution between the source and output image has to be accounted for, so the translation entries are rescaled accordingly.) For example, for a system with Cartesian coordinates (t_x, t_y), no rotation (θ = 0) and scale s, the matrix A is:

(3)   [ 1/s   0    −t_x/s ]
      [ 0    1/s   −t_y/s ]
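To make the construction concrete, the following sketch builds the matrix A of Eq. (2) from translation, angle and scale; it assumes the spatial transformer's normalized [-1, 1] grid convention and omits the pixel-fraction rescaling mentioned above:

```python
import torch

def coordinate_consistent_theta(tx, ty, angle, scale):
    """Build the 2x3 spatial-transformer matrix of Eq. (2), so that decoder inputs
    (tx, ty, angle, scale) place, rotate and scale the writing attention window.
    All inputs are tensors of shape (B,) in the normalized grid convention."""
    cos, sin = torch.cos(angle), torch.sin(angle)
    row0 = torch.stack([cos / scale, sin / scale,
                        -(cos * tx + sin * ty) / scale], dim=-1)
    row1 = torch.stack([-sin / scale, cos / scale,
                        (sin * tx - cos * ty) / scale], dim=-1)
    return torch.stack([row0, row1], dim=1)        # (B, 2, 3)

# Cartesian special case (Eq. 3): angle = 0 and fixed scale s gives
# [[1/s, 0, -tx/s], [0, 1/s, -ty/s]].
```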

The input image to each spatial transformer (one per object) is a learned content c_k and a learned mask m_k. Additionally, there is a learned background content and a background mask whose values are fixed. One may think of the content as an RGB image containing the texture of an object and the mask as a grayscale image containing the shape of the object. The content and mask of each object are transformed according to:

(4)   c'_k = ST(c_k, A_k),   m'_k = ST(m_k, A_k),

and the pre-activated masks are combined via a softmax across objects:

(5)   M_k = softmax_k(m'_k),   computed pixel-wise over the N objects and the background,

The final image is then obtained by combining the resulting masks with the corresponding contents:

(6)   x̂ = Σ_k M_k ⊙ c'_k,   where ⊙ is element-wise multiplication and the sum runs over the objects and the background.

The decoder architecture is shown in Figure 1 (bottom right). The combined use of STs and masks provides a natural way to model depth ordering, allowing us to capture occlusions between objects.
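A minimal PyTorch sketch of this rendering decoder is shown below; the template size, initialization, and the fixed zero background-mask logits are assumptions made for illustration, and the transformer matrices are those produced by the construction above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoordinateConsistentDecoder(nn.Module):
    """Sketch: one spatial transformer writes each learned content/mask pair at its
    object's coordinates; masks are combined with a softmax and contents composited."""
    def __init__(self, n_objects, img_size, template_size=16, channels=3):
        super().__init__()
        h, w = img_size
        self.out_size = (channels, h, w)
        self.contents = nn.Parameter(torch.rand(n_objects, channels, template_size, template_size))
        self.masks = nn.Parameter(torch.zeros(n_objects, 1, template_size, template_size))
        self.bg_content = nn.Parameter(torch.rand(1, channels, h, w))          # learned background
        self.bg_mask = nn.Parameter(torch.zeros(1, 1, h, w), requires_grad=False)  # fixed mask values

    def forward(self, theta):                        # theta: (B, n_objects, 2, 3), as in Eq. (2)
        b, n = theta.shape[:2]
        c, h, w = self.out_size
        grid = F.affine_grid(theta.reshape(b * n, 2, 3), [b * n, c, h, w], align_corners=False)
        contents = self.contents.unsqueeze(0).expand(b, -1, -1, -1, -1).reshape(b * n, c, *self.contents.shape[-2:])
        masks = self.masks.unsqueeze(0).expand(b, -1, -1, -1, -1).reshape(b * n, 1, *self.masks.shape[-2:])
        wc = F.grid_sample(contents, grid, align_corners=False).reshape(b, n, c, h, w)  # Eq. (4)
        wm = F.grid_sample(masks, grid, align_corners=False).reshape(b, n, 1, h, w)
        wc = torch.cat([wc, self.bg_content.expand(b, -1, -1, -1).unsqueeze(1)], dim=1)
        wm = torch.cat([wm, self.bg_mask.expand(b, -1, -1, -1).unsqueeze(1)], dim=1)
        weights = F.softmax(wm, dim=1)               # Eq. (5): softmax over objects + background
        return (weights * wc).sum(dim=1)             # Eq. (6): weighted sum of contents
```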

Auxiliary autoencoder loss Using a constrained decoder ensures the encoder and decoder produce objects in consistent locations. However, it is hard to learn the full model from future frame prediction alone, since the encoder’s training signal passes exclusively through the physics engine. To alleviate this and quickly build a good encoder/decoder representation, we add a static per-frame autoencoder loss.

Training During training we use T_in input frames and predict the next T_pred frames. Defining the frames produced by the decoder via the physics engine as x̂_t, and the frames produced by the decoder directly from the encoder’s output as x̃_t, the total loss is:

(7)   L = Σ_t ||x̂_t − x_t||² + λ Σ_t ||x̃_t − x_t||²,

where x_t are the ground-truth frames, the first sum runs over the predicted frames, the second sum runs over the frames reconstructed through the static autoencoder path, and λ is a hyper-parameter. We use a mean-squared-error loss throughout. During testing we predict an additional T_ext frames in order to evaluate long-term prediction beyond the horizon seen during training.
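A sketch of this objective, assuming the frames have already been decoded along the two paths (names such as `pred_frames` and `lam` are illustrative):

```python
import torch.nn.functional as F

def total_loss(pred_frames, recon_frames, target_pred, target_input, lam):
    """Sketch of Eq. (7): prediction loss through the physics engine plus a
    weighted per-frame autoencoder loss."""
    prediction_loss = F.mse_loss(pred_frames, target_pred)      # frames decoded from physics rollouts
    autoencoder_loss = F.mse_loss(recon_frames, target_input)   # frames decoded directly from encoder outputs
    return prediction_loss + lam * autoencoder_loss
```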

Figure 2: Future frame predictions for 3-ball gravitational system (top) and 2-digit spring system (bottom). Further rollouts for all datasets are shown in the Supplementary Material C.

4 Experiments

4.1 Physical parameter learning and future prediction

Setup We train our model on 4 different systems: two colored balls bouncing off the image edges; two colored balls connected by a spring; and three colored balls with gravitational pull – all on a black background. To test greater visual complexity, we also use 2 MNIST digits connected by a spring, on a CIFAR background. We use a different value of the prediction horizon T_pred for each of these datasets. For the spring systems the physical parameters to be learned are the spring constant and equilibrium distance, and for the gravitational system it is the gravity constant. In all cases the object masses are fixed. We provide exact descriptions of the equations used in these systems and other training details in Supplementary Material A and B.

All datasets consist of 5000 sequences for training, 500 for validation, and 500 for testing. We use a learnable ST scale parameter, initialized separately for the balls and digits datasets, and the same autoencoder loss weight λ in all cases. We compare our model to the recently proposed DDPAE (Hsieh et al., 2018) (using the code provided by the authors), which uses an inverse-graphics model with black-box dynamics, and to a variant of the VideoLSTM (Srivastava et al., 2015), which uses black-box encoding, decoding and dynamics. DDPAE does not support scenes with background, so it is excluded from the MNIST system.

Figure 3: Frame prediction accuracy (SSI, higher is better) for the balls datasets. Left of the green dashed line corresponds to the training prediction range; right of it corresponds to extrapolation. We outperform DDPAE (Hsieh et al., 2018) and VideoLSTM (Srivastava et al., 2015) in extrapolation due to incorporating explicit physics.

Results Future frame predictions for two of the systems are shown in Figure 2, and the per-step Structural Similarity Index (SSI) of the models over the prediction and extrapolation ranges is shown in Figure 3. (We choose SSI over MSE as an evaluation metric because it is more robust to pixel-level differences and alignment; the results with MSE are similar.) While all models obtain low error in the prediction range (up to the green dashed line), our model is significantly better in the extrapolation range. Even many steps into the future, our model’s predictions remain highly accurate, unlike those of its competitors (Figure 2). This shows the value of using an explicit physics model in systems where the dynamics are non-linear yet well defined.

This difference is made larger by the fact that in some of these systems the harder-to-predict parts of the dynamics do not appear during training. For example, in the gravitational system, whiplash from objects coming into close contact is seldom present in the steps seen during training, but it happens frequently in the extrapolation steps evaluated during testing. A model without a sufficiently strong inductive bias on the dynamics is simply not able to infer close-distance behavior from long-distance behavior. Our model’s physics engine, having at most a few learnable parameters, can correctly predict long-term trajectories even when trained on data representing only a small fraction of all possible dynamics. We encourage the reader to watch the videos of further rollouts for all the datasets at https://bit.ly/2Y0KYMT. Table 1 shows that our model finds physical parameters close to the ground-truth values used to generate the datasets, and Figure 4 (left) shows the contents and masks learned by the decoder.

Figure 4: Left: Visualization of the contents and masks learned by the decoder (object masks and the object contents used for rendering). Contents and masks correctly capture each part of the scene: colored balls, MNIST digits and CIFAR background. We omit the black background learned on the balls datasets. Right: Prediction extrapolates to unseen parts of the image.
Table 1: Physical parameters learned from video (spring constant and equilibrium distance for the 2-balls and 2-digits spring systems, gravity constant for the 3-balls gravity system) are within 10% of the true system parameters.

Ablation study Since the encoder and decoder must discover the objects present in the image and their locations, one might assume that the velocity estimator and physics engine could be learned using only the prediction loss, and the encoder/decoder using only the static autoencoder loss, i.e., without joint training. In Table 2 we compare the performance of four variants on the 3-ball gravity dataset: joint training using only the prediction loss; joint training using the prediction and autoencoder losses; training the encoder/decoder on the autoencoder loss and the velocity estimator and physics engine on the prediction loss; and joint training using an MLP black-box decoder.

                              Prediction loss only   Separate grads.   Joint   Black-box decoder, joint
Prediction loss (validation)  31.4                   28.1              1.39    30.9
Autoencoder loss (validation) 20.5                   0.22              0.63    2.87
Table 2: Validation losses under different training conditions. ‘Prediction loss only’: joint training using only the prediction loss. ‘Separate grads.’: train the encoder/decoder on the autoencoder loss, and the velocity estimator and physics engine on the prediction loss. ‘Black-box decoder, joint’: joint training using a standard MLP network as the decoder. Only joint training of the full model with the coordinate-consistent decoder succeeds.

We can see that only joint training with both the prediction and autoencoder losses obtains satisfactory performance, and that the use of the coordinate-consistent decoder is critical. The prediction loss is essential for the model to learn contents and masks that can be used correctly by the physics engine. This can be understood by considering how object interaction constrains the decoder. In the gravitational system, the forces between objects depend only on their distances, so if two objects swap locations the forces must stay the same. If the content/mask learned for each object is centered differently relative to its template center, rendering the objects at positions (p1, p2) versus (p2, p1) will produce different distances between the two objects in image space, violating this permutation-invariance property of the system. Learning the encoder/decoder jointly with the velocity estimator and physics engine on the prediction loss allows the encoder and decoder to learn locations and contents/masks that satisfy the characteristics of the system, and allows the physics to be learned correctly.

Extrapolation to unseen image regions One limitation of standard fully-connected or deconvolutional decoders is their inability to decode states corresponding to object poses or locations not seen during training. For example, if no objects appear in the bottom half of the image in the training set, a fully-connected decoder will simply learn to output zeros in that region; if objects move into the bottom half of the image at test time, the decoder will still output zeros, because it lacks the inductive bias necessary to extrapolate in image space. In contrast, a rendering decoder is able to correctly decode states not seen during training (Figure 4, right). In the limit where the renderer corresponds to a full-blown graphics engine, any pose, location, color, etc. not seen during training can still be rendered correctly. This property gives models using rendering decoders, such as ours and Hsieh et al. (2018), an important advantage in terms of data efficiency. We note, however, that in general this advantage does not extend to correctly inferring states from images whose objects lie in regions not seen during training, because the encoders used are typically composed simply of convolutional and fully-connected layers, with limited de-rendering inductive biases.

4.2 Vision-based model-predictive control

Setup One of the main applications of our method is to learn the (actuated) dynamics of a physical system from video, thus enabling vision-based planning and control. Here we apply it to the pendulum from OpenAI Gym (Brockman et al., 2016) – a task typically solved from proprioceptive state, not pixels. For training we collect 5000 sequences of 14 frames with random initial states and random actions. The physical parameters to learn are the gravity and actuation coefficients. We use the trained model for MPC as follows. At every step, the previous 4 frames are passed to the encoder and velocity estimator to estimate the pendulum’s current angle and angular velocity. These are passed to the physics engine with the learned physical parameters. We perform 100-step model-predictive control using the cross-entropy method (Rubinstein, 1997), exactly as described in Hafner et al. (2019), setting the upright position with zero velocity as the goal. We compare our model to an oracle model, which has the true physical parameters and access to the true pendulum position and velocity (i.e. it is not vision-based), as well as to a concurrent state-of-the-art model-based RL method, PlaNet (Hafner et al., 2019), and a model-free deep deterministic policy gradient (DDPG) agent (Lillicrap et al., 2016). (DDPG, TRPO and PPO learned from pixels failed to solve the pendulum, highlighting the complexity of the vision-based pendulum control task and the brittleness of model-free reinforcement learning strategies.) To provide an equivalent comparison to our model, we train PlaNet on the same randomly collected episodes.
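As an illustration of the planning loop, the sketch below runs the cross-entropy method over action sequences using the learned physics engine; the population size, number of elites and iteration count are placeholder values, not the settings of Hafner et al. (2019), and `cost_fn` is a hypothetical cost such as distance to upright plus a velocity penalty:

```python
import torch

def cem_plan(state, physics_engine, cost_fn, horizon=100, pop=1000, elites=100,
             iters=10, action_dim=1):
    """Sketch of cross-entropy-method planning with the learned physics engine."""
    mean = torch.zeros(horizon, action_dim)
    std = torch.ones(horizon, action_dim)
    for _ in range(iters):
        actions = mean + std * torch.randn(pop, horizon, action_dim)   # sample action sequences
        pos, vel = state                                               # (1, n_obj, D) each
        pos = pos.expand(pop, *pos.shape[1:]).clone()
        vel = vel.expand(pop, *vel.shape[1:]).clone()
        cost = torch.zeros(pop)
        for t in range(horizon):                                       # roll out the learned dynamics
            pos, vel = physics_engine(pos, vel, action=actions[:, t])
            cost = cost + cost_fn(pos, vel)                            # accumulate per-step cost
        best = cost.topk(elites, largest=False).indices                # keep the lowest-cost sequences
        mean = actions[best].mean(dim=0)
        std = actions[best].std(dim=0)
    return mean[0]                                                     # execute the first planned action
```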

Results In terms of system identification, our model recovers the correct gravity and force coefficient values from vision alone, which is a prerequisite for correct planning and control. Figure 5 (left) highlights the data efficiency of our method, which is comparable to PlaNet while being dramatically faster than DDPG from pixels. Importantly, the interpretability of the explicit physics in our model provides some unique capabilities. We can perform simple counterfactual physical reasoning such as ‘How should I adapt my control policy if gravity were increased?’, which enables zero-shot adaptation to new environmental parameters. Figure 5 (middle) shows that our model can exploit such reasoning to succeed immediately over a wide variety of gravities. Similarly, while the typical inverted-pendulum goal is to balance the pendulum upright, interpretable physics means that this is only one point in a space of potential goals. Figure 5 (right) evaluates the goal-parameterized control enabled by our model: any feasible target angle can be reached directly by the controller, generalising across the space of goals even though only one goal (vertical) was seen during training. Importantly, these last two capabilities are provided immediately by our model, but cannot be achieved without further adaptive learning by alternatives that are reward-based (Hafner et al., 2019; Mnih et al., 2016) or rely on implicit physics (Hafner et al., 2019).

Figure 5: Comparison between our model and PlaNet (Hafner et al., 2019) in terms of learning sample efficiency (left). Explicit physics allows reasoning for zero-shot adaptation to domain shift in gravity (center) and goal-driven control to balance the pendulum in any position (right). DDPG (VAE) corresponds to a DDPG agent trained on the latent space of an autoencoder (trained with 320k images) after 80k steps; DDPG (proprio) corresponds to an agent trained from proprioception after 30k steps.

5 Conclusion

Physics-as-inverse graphics provides a valuable mechanism to include inductive bias about physical data generating processes into learning. This allows lightly supervised object tracking and system identification, in addition to sample efficient, generalisable and flexible control. However, incorporating this structure into lightly supervised deep learning models has proven challenging to date. We introduced a model that accomplishes this, relying on a coordinate-consistent decoder that enables image reconstruction from physics. We have shown that our model is able to perform accurate long term prediction and that it can be used to learn the dynamics of an actuated system, allowing us to perform vision-based model-predictive control.

Acknowledgements

We thank Paul Micaelli, Luke Darlow, Ben Rhodes, Chenyang Zhao and Nick Pawlowski for useful discussions and feedback. We thank Ondrey Mello for proof-reading early versions of this work. This work was supported in part by the EPSRC Centre for Doctoral Training in Data Science, funded by the UK Engineering and Physical Sciences Research Council (grant EP/L016427/1) and the University of Edinburgh.

References

  • Barth-Maron et al. (2018) Barth-Maron, G., Hoffman, M. W., Budden, D., Dabney, W., Horgan, D., Muldal, A., Heess, N., and Lillicrap, T. (2018). Distributed distributional deterministic policy gradients. ICLR.
  • Belbute-Peres et al. (2018) Belbute-Peres, F. D. A., Smith, K. A., Allen, K. R., Tenenbaum, J. B., and Kolter, J. Z. (2018). End-to-End Differentiable Physics for Learning and Control. In NIPS.
  • Brockman et al. (2016) Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J., and Zaremba, W. (2016). OpenAI Gym.
  • Burke et al. (2019) Burke, M., Penkov, S., and Ramamoorthy, S. (2019). From explanation to synthesis: Compositional program induction for learning from demonstration. Robotics: Science and Systems (R:SS).
  • Byravan et al. (2018) Byravan, A., Leeb, F., Meier, F., and Fox, D. (2018). SE3-Pose-Nets: Structured Deep Dynamics Models for Visuomotor Planning and Control. In ICRA.
  • Chang et al. (2017) Chang, M. B., Ullman, T., Torralba, A., and Tenenbaum, J. B. (2017). A Compositional Object-Based Approach to Learning Physical Dynamics. In ICLR.
  • Chen et al. (2018) Chen, R. T. Q., Rubanova, Y., Bettencourt, J., and Duvenaud, D. (2018). Neural Ordinary Differential Equations. In NIPS.
  • Degrave et al. (2016) Degrave, J., Hermans, M., Dambre, J., and wyffels, F. (2016). A Differentiable Physics Engine for Deep Learning in Robotics.
  • Deisenroth and Rasmussen (2011) Deisenroth, M. and Rasmussen, C. E. (2011). Pilco: A model-based and data-efficient approach to policy search. In ICML.
  • Ehrhardt et al. (2018) Ehrhardt, S., Monszpart, A., Vedaldi, A., and Mitra, N. (2018). Unsupervised Intuitive Physics from Visual Observations. CoRR, abs/1805.08095.
  • Ellis et al. (2017) Ellis, K., Ritchie, D., Solar-Lezama, A., and Tenenbaum, J. B. (2017). Learning to Infer Graphics Programs from Hand-Drawn Images.
  • Eslami et al. (2016) Eslami, S. M. A., Heess, N., Weber, T., Tassa, Y., Kavukcuoglu, K., and Hinton, G. E. (2016). Attend, Infer, Repeat: Fast Scene Understanding with Generative Models. In NIPS.
  • Finn et al. (2016) Finn, C., Goodfellow, I., and Levine, S. (2016). Unsupervised Learning for Physical Interaction through Video Prediction. In NIPS.
  • Fraccaro et al. (2017) Fraccaro, M., Kamronn, S., Paquet, U., and Winther, O. (2017). A Disentangled Recognition and Nonlinear Dynamics Model for Unsupervised Learning. In NIPS.
  • Fragkiadaki et al. (2015) Fragkiadaki, K., Agrawal, P., Levine, S., and Malik, J. (2015). Learning Visual Predictive Models of Physics for Playing Billiards.
  • Fukushima (1980) Fukushima, K. (1980). Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36(4):193–202.
  • Hafner et al. (2019) Hafner, D., Lillicrap, T., Fischer, I., Villegas, R., Ha, D., Lee, H., and Davidson, J. (2019). Learning latent dynamics for planning from pixels. ICML.
  • Hinton et al. (2011) Hinton, G. E., Krizhevsky, A., and Wang, S. D. (2011). Transforming auto-encoders. In ICANN, pages 44–51.
  • Hochreiter and Schmidhuber (1997) Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural computation, 9(8):1735–1780.
  • Hsieh et al. (2018) Hsieh, J.-T., Liu, B., Huang, D.-A., Fei-Fei, L., and Niebles, J. C. (2018). Learning to Decompose and Disentangle Representations for Video Prediction. In NIPS.
  • Huang and Murphy (2015) Huang, J. and Murphy, K. (2015). Efficient Inference in Occlusion-Aware Generative Models of Images. CoRR, abs/1511.06362.
  • Jaderberg et al. (2015) Jaderberg, M., Simonyan, K., Zisserman, A., and Kavukcuoglu, K. (2015). Spatial Transformer Networks. In NIPS.
  • Janner et al. (2019) Janner, M., Levine, S., Freeman, W. T., Tenenbaum, J. B., Finn, C., and Wu, J. (2019). Reasoning About Physical Interactions with Object-Oriented Prediction and Planning. In ICLR.
  • Kosiorek et al. (2018) Kosiorek, A. R., Kim, H., Posner, I., and Whye Teh, Y. (2018). Sequential Attend, Infer, Repeat: Generative Modelling of Moving Objects. In NIPS.
  • Krizhevsky (2009) Krizhevsky, A. (2009). Learning Multiple Layers of Features from Tiny Images.
  • Kulkarni et al. (2015) Kulkarni, T. D., Whitney, W., Kohli, P., and Tenenbaum, J. B. (2015). Deep Convolutional Inverse Graphics Network. In NIPS.
  • LeCun et al. (1998) LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998). Gradient Based Learning Applied to Document Recognition. Proceedings of the IEEE, 86(11):2278–2324.
  • Lillicrap et al. (2016) Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., and Wierstra, D. (2016). Continuous control with deep reinforcement learning. ICLR.
  • Mania et al. (2018) Mania, H., Guy, A., and Recht, B. (2018). Simple random search provides a competitive approach to reinforcement learning. NIPS.
  • Mnih et al. (2016) Mnih, V., Badia, A. P., Mirza, M., Graves, A., Lillicrap, T., Harley, T., Silver, D., and Kavukcuoglu, K. (2016). Asynchronous methods for deep reinforcement learning. In ICML.
  • Penkov and Ramamoorthy (2019) Penkov, S. and Ramamoorthy, S. (2019). Learning programmatically structured representations with perceptor gradients. ICLR.
  • Rezende et al. (2016) Rezende, D. J., Mohamed, S., Danihelka, I., Gregor, K., and Wierstra, D. (2016). One-Shot Generalization in Deep Generative Models. In ICML.
  • Romaszko et al. (2017) Romaszko, L., Williams, C. K. I., Moreno, P., and Kohli, P. (2017). Vision-as-Inverse-Graphics: Obtaining a Rich 3D Explanation of a Scene from a Single Image. In ICCV.
  • Ronneberger et al. (2015) Ronneberger, O., Fischer, P., and Brox, T. (2015). U-Net: Convolutional Networks for Biomedical Image Segmentation. In MICCAI.
  • Rubinstein (1997) Rubinstein, R. Y. (1997). Optimization of computer simulation models with rare events. EJOR.
  • Srivastava et al. (2015) Srivastava, N., Mansimov, E., and Salakhutdinov, R. (2015). Unsupervised Learning of Video Representations using LSTMs. In ICML.
  • Stewart and Ermon (2017) Stewart, R. and Ermon, S. (2017). Label-Free Supervision of Neural Networks with Physics and Domain Knowledge. In AAAI.
  • van Steenkiste et al. (2018) van Steenkiste, S., Chang, M., Greff, K., and Schmidhuber, J. (2018). Relational Neural Expectation Maximization: Unsupervised Discovery of Objects and their Interactions. In ICLR.
  • Watters et al. (2017) Watters, N., Tacchetti, A., Weber, T., Pascanu, R., Battaglia, P., and Zoran, D. (2017). Visual Interaction Networks: Learning a Physics Simulator from Video. In NIPS.
  • Williams (1992) Williams, R. J. (1992). Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning. Machine Learning, 8:229–256.
  • Wu et al. (2017a) Wu, J., Tenenbaum, J. B., and Kohli, P. (2017a). Neural Scene De-rendering. In CVPR.
  • Wu et al. (2017b) Wu, J., Lu, E., Kohli, P., Freeman, W. T., and Tenenbaum, J. B. (2017b). Learning to See Physics via Visual De-animation. In NIPS.
  • Xu et al. (2019) Xu, Z., Liu, Z., Sun, C., Murphy, K., Freeman, W. T., Tenenbaum, J. B., and Wu, J. (2019). Unsupervised Discovery of Parts, Structure, and Dynamics. In ICLR.
  • Zheng et al. (2018) Zheng, D., Luo, V., Wu, J., and Tenenbaum, J. B. (2018). Unsupervised Learning of Latent Physical Properties Using Perception-Prediction Networks. In UAI.
  • Zhu et al. (2018) Zhu, G., Huang, Z., and Zhang, C. (2018). Object-Oriented Dynamics Predictor. In NIPS.