Forward Prediction for Physical Reasoning

06/18/2020 · by Rohit Girdhar, et al. · Facebook AI Research

Physical reasoning requires forward prediction: the ability to forecast what will happen next given some initial world state. We study the performance of state-of-the-art forward-prediction models in complex physical-reasoning tasks. We do so by incorporating models that operate on object or pixel-based representations of the world, into simple physical-reasoning agents. We find that forward-prediction models improve the performance of physical-reasoning agents, particularly on complex tasks that involve many objects. However, we also find that these improvements are contingent on the training tasks being similar to the test tasks, and that generalization to different tasks is more challenging. Surprisingly, we observe that forward predictors with better pixel accuracy do not necessarily lead to better physical-reasoning performance. Nevertheless, our best models set a new state-of-the-art on the PHYRE benchmark for physical reasoning.


1 Introduction

When presented with a picture of a Rube Goldberg machine (https://en.wikipedia.org/wiki/Rube_Goldberg_machine), humans can predict how the machine works. We do so by using our intuitive understanding of concepts such as force, mass, energy, and collisions to imagine how the machine state would evolve once released. This ability allows us to solve real-world physical-reasoning tasks, such as how to strike a billiards ball with the cue such that it ends up in the pocket, or how to balance the weight of two children on a see-saw. In contrast, the physical-reasoning abilities of machine-learning models have largely been limited to closed domains such as predicting the dynamics of multi-body gravitational systems battaglia2016interaction ; kipf2020contrastive , the stability of block towers lerer2016learning ; groth2018shapestacks , or the physical plausibility of observed dynamics riochet2018intphys . In this work, we explore the use of imaginative, forward-prediction approaches to solve complex physical-reasoning puzzles. We study modern object-based battaglia2016interaction ; gonzalez2020learning ; watters2017visual and pixel-based finn2016unsupervised ; ye2019compositional ; hafner2020dream forward-prediction models in simple search-based agents on the PHYRE benchmark bakhtin2019phyre . PHYRE tasks involve placing one or two balls in a 2D world, such that the world reaches a state with a particular property (e.g., two balls are touching) after being played forward; see Figure 1 for an example. Given a perfect forward-prediction model, physical reasoning in PHYRE is straightforward: it involves finding a ball placement for which the model predicts a terminal state that solves the task at hand.

Our best models substantially outperform the prior state-of-the-art on PHYRE. Specifically, we find that forward-prediction models can improve the performance of physical-reasoning agents when the models are trained on tasks that are very similar to the tasks that need to be solved at test time. However, forward-prediction models do not appear to generalize well to unseen tasks, presumably, because small deviations in forward predictions tend to compound over time. We also observe that better forward prediction does not always lead to better physical-reasoning performance. In particular, we find that object-based forward-prediction models make more accurate forward predictions but pixel-based models are more helpful in physical reasoning. This observation may be the result of two key advantages of models using pixel-based state representations. First, it is easier to determine whether a task is solved in a pixel-based representation than in an object-based one. Second, pixel-based models facilitate end-to-end training of the forward-prediction model and the task-solution model in a way that object-based models do not in the absence of a differentiable renderer liu2019rasterizer ; loper2014 .

2 Related Work

Our study builds on a large body of prior research on forward prediction and physical reasoning.

Forward prediction. The goal of forward-prediction models is to accurately predict the future state of objects in the world based on observations of past states. Forward-prediction models operate either on object-based (proprioceptive) representations or on pixel-based state representations. A popular class of object-based forward-prediction models uses graph neural networks to model interactions between objects kipf2018neural ; battaglia2016interaction , for example, to successfully simulate environments with thousands of particles gonzalez2020learning ; li2018learning . Another class of object-based models explicitly represents the Hamiltonian or Lagrangian of the physical system greydanus2019hamiltonian ; cranmer2020lagrangian ; chen2019symplectic . While promising, such models are currently limited to simple point objects and physical systems that conserve energy. Hence, they are not applicable to our study of PHYRE, which contains dissipative forces and extended objects.

Modern pixel-based forward-prediction models extract state representations by applying a convolutional network on the observed frame(s) watters2017visual ; kipf2020contrastive or on object segments ye2019compositional ; janner2019reasoning . The models perform forward prediction on the resulting state representation using graph neural networks kipf2020contrastive ; ye2019compositional , recurrent neural networks xingjian2015convolutional ; hochreiter1997long ; cho2014learning ; finn2016unsupervised , or a physics engine wu2017vda . The models can be trained to predict object state watters2017visual , to perform pixel reconstruction villegas2017learning ; ye2019compositional , to transform the previous frames ye2018interpretable ; ye2019compositional ; finn2016unsupervised , or to produce a contrastive state representation kipf2020contrastive ; hafner2020dream . One recent model li2020visualgrounding transforms the observed frame into a particle representation and runs a neural particle simulator li2018learning ; gonzalez2020learning on that representation.

Physical reasoning. Tasks for physical reasoning gauge a system’s ability to intuitively reason about physical phenomena battaglia2013simulation ; kubricht2017intuitive . Prior work on physical reasoning has developed models that predict whether physical structures are stable lerer2016learning ; groth2018shapestacks , predict whether physical phenomena are plausible riochet2018intphys , describe or answer questions about physical systems yi2019clevrer ; rajani2020esprit , perform counterfactual prediction in physical worlds baradel2020cophy , or solve physical puzzles allen2019tools ; bakhtin2019phyre . Unlike other physical reasoning tasks, physical-puzzle benchmarks such as PHYRE bakhtin2019phyre and Tools allen2019tools incorporate a full physics simulator. This makes these benchmarks particularly suitable for studying the effectiveness of forward prediction for physical reasoning. We adopt the PHYRE benchmark in our study for that reason.

Figure 1: PHYRE tasks require placing an object (the red ball) in the scene, such that when the simulation is rolled out, the blue and green objects touch for at least three seconds. In (a), the ball is too small and does not knock the green ball off the platform. In (b), the ball is larger and solves the task. In (c), the ball is placed slightly farther left, which results in the task not being solved. Small variations can have large effects on the roll-out and the effectiveness of an action.

3 PHYRE Benchmark

In PHYRE, each task consists of an initial state that is a 256×256 image. Colors indicate object properties: for instance, black objects are static and gray objects are dynamic, and neither is involved in the goal state. PHYRE defines two task tiers (B and 2B) that differ in their action space. An action involves placing one ball (in the B tier) or two balls (in the 2B tier) in the image. Balls are parameterized by their position and radius, which determines the ball's mass. An action solves the task if the blue or purple object touches the green object (the goal state) for a minimum of three seconds when the simulation is rolled out. Figure 1 illustrates the challenging nature of PHYRE tasks: small variations can change incorrect actions (Figure 1(a) and (c)) into a correct solution (Figure 1(b)).
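As an illustration of this task-action interface, the sketch below runs one action through the simulator. It assumes the API of the publicly released phyre Python package (names follow its README and may differ across versions):

```python
import phyre

# One of the train/dev/test folds for the one-ball tier, within-template setting.
train_ids, dev_ids, test_ids = phyre.get_fold('ball_within_template', 0)
simulator = phyre.initialize_simulator(train_ids, 'ball')

# A B-tier action is one ball: (x, y, radius), each normalized to [0, 1].
actions = simulator.build_discrete_action_space(max_actions=1000)
simulation = simulator.simulate_action(0, actions[0], need_images=True)

print(simulation.status.is_solved())  # did blue/purple touch green for 3 seconds?
```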

Each tier in PHYRE contains 25 task templates. A task template contains 100 tasks that are structurally similar but differ in the initial positions of the objects. Performance on PHYRE is measured in two settings. The within-template setting defines a train-test split over tasks, such that training and test tasks can contain different instantiations of the same template. The cross-template setting splits across templates, such that training and test tasks never correspond to the same template. A PHYRE agent can make multiple attempts at solving a task. The performance of the agent is measured by the area under the success curve (AUCCESS; bakhtin2019phyre ), which ranges from 0 to 100 and is higher when the agent needs fewer attempts to solve a task. Performance is averaged over 10 random splits of tasks or templates. In addition to AUCCESS, we also measure forward-prediction accuracy (FPA), which does not consider whether an action solves a task. We define FPA as the percentage of pixels that match the ground truth in a 10-second rollout at 1 frame per second (fps); we only consider pixels that correspond to dynamic objects when computing forward-prediction accuracy.
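As a concrete reading of this definition, the sketch below computes FPA for one roll-out; the frame format and the dynamic-object mask are assumptions for illustration:

```python
import numpy as np

def forward_prediction_accuracy(pred_frames, gt_frames, dynamic_mask):
    """FPA: percentage of dynamic-object pixels that match the ground truth.

    pred_frames, gt_frames: (T, H, W) integer maps of object colors,
        sampled at 1 fps over a 10-second roll-out (T = 10).
    dynamic_mask: (T, H, W) boolean map that is True wherever the ground
        truth or the prediction contains a dynamic object.
    """
    matches = (pred_frames == gt_frames) & dynamic_mask
    return 100.0 * matches.sum() / max(dynamic_mask.sum(), 1)
```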

4 Methods

Figure 2: We study models that take as input a set of initial states via an object-based or a pixel-based representation (blue box). We input the representation into a range of forward-prediction models, which generally comprise an encoder (yellow box), a dynamics model (green box), and a decoder (gray box). We feed the output to a task-solution model (red box) that predicts whether the goal state is reached. At inference time, we search over actions that alter the initial state by adding additional objects to it. For each action (and corresponding initial state), we predict a task-solution probability; we then select the action most likely to solve the task.

We develop physical-reasoning agents for PHYRE that use learned forward-prediction models in a search strategy to find actions that solve a task. The search strategy maximizes the score of a task-solution model that, given a world state, predicts whether that state will lead to a task solution. Figure 2 illustrates how our forward-prediction and task-solution models are combined. We describe both types of models, as well as the search strategy we use, separately below.

4.1 Forward-Prediction Models

At time $t$, a forward-prediction model aims to predict the next state, $\hat{s}_{t+1}$, of a physical system based on a series of past states $s_{1:t}$ of that system. The model consists of a state encoder $e$, a forward-dynamics model $d$, and a state decoder $g$. The past states are first encoded into latent representations using the learned encoder with parameters $\theta_e$, i.e., $z_i = e(s_i; \theta_e)$. The latent representations are then passed into the forward-dynamics model with parameters $\theta_d$: $\hat{z}_{t+1} = d(z_{1:t}; \theta_d)$. Finally, the predicted future latent representation is decoded using the decoder with parameters $\theta_g$: $\hat{s}_{t+1} = g(\hat{z}_{t+1}; \theta_g)$. We learn the model parameters $\{\theta_e, \theta_d, \theta_g\}$ on a large training set of observations of the system's dynamics. We experiment with forward-prediction models that use either object-based or pixel-based state representations.
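A minimal sketch of this decomposition and its autoregressive roll-out, with hypothetical module names (the concrete encoders, dynamics models, and decoders are described below):

```python
import torch
import torch.nn as nn

class ForwardPredictor(nn.Module):
    """Generic encoder / dynamics / decoder forward-prediction model."""

    def __init__(self, encoder: nn.Module, dynamics: nn.Module, decoder: nn.Module):
        super().__init__()
        self.encoder, self.dynamics, self.decoder = encoder, dynamics, decoder

    def forward(self, past_states, n_steps: int):
        # past_states: list of observed states s_1..s_t (tensors).
        latents = [self.encoder(s) for s in past_states]
        predictions = []
        for _ in range(n_steps):
            # Condition the dynamics model on the most recent latent states.
            z_next = self.dynamics(torch.stack(latents[-3:], dim=1))
            predictions.append(self.decoder(z_next))
            latents.append(z_next)  # autoregressive: feed the prediction back in
        return predictions
```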

Object-based models. We experiment with two object-based forward-prediction models that capture interactions between objects: interaction networks battaglia2016interaction and transformers vaswani2017attention . Both object-based forward-prediction models represent the system’s state as a set of tuples that contain object type (ball, stick, etc.), location, size, color, and orientation. The models are trained by minimizing the mean squared error between the predicted and observed state representations on the training data.


  • Interaction networks battaglia2016interaction (IN) maintain a vector representation for each object in the system at time $t$. Each vector captures information about the object's type, position, and velocity. A relationship is computed for each ordered pair of objects, designating the first object as the sender and the second as the receiver of the relation. The relation is characterized by the concatenation of the two objects' feature vectors and a one-hot encoding of whether the sender object is static or dynamic. The dynamics model embeds the relations into "effects" per object using a multilayer perceptron (MLP). The effects exerted on an object are summed into a single effect per object. This aggregated effect is concatenated with the object's previous state vector, from a particular temporal offset, along with a placeholder for external effects, e.g., gravity. The result is passed through another MLP to predict the velocity of the object; see the sketch after this list. We use two interaction networks with different temporal offsets watters2017visual , and aggregate the results in an MLP to generate the final velocity prediction. The decoder then sums the object's predicted velocity with its previous position to obtain the object's new position.

  • Transformers vaswani2017attention (Tx) also maintain a representation per object: they encode each object's state using a 2-layer MLP. In contrast to IN, the dynamics model in Tx is a Transformer that uses self-attention layers over the latent representations to predict the future state. We add a sinusoidal temporal position encoding vaswani2017attention of time $t$ to the features of each object. The resulting representation is fed into a Transformer encoder with 6 layers and 8 heads. The output representation is decoded through an MLP and added to the previous state to obtain the future-state prediction.
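The following is a minimal, illustrative sketch of the interaction-network update for a single temporal offset; the layer sizes and module names are assumptions rather than our exact architecture (see the appendix for the hyperparameters we use):

```python
import torch
import torch.nn as nn

class InteractionStep(nn.Module):
    """One IN-style update: relation effects -> aggregation -> velocity."""

    def __init__(self, obj_dim: int, effect_dim: int = 50):
        super().__init__()
        # Relation encoder over (sender, receiver, sender static/dynamic flag).
        self.relation_mlp = nn.Sequential(
            nn.Linear(2 * obj_dim + 2, 100), nn.ReLU(), nn.Linear(100, effect_dim))
        # Object updater over (previous state, aggregated effect, external effect).
        self.object_mlp = nn.Sequential(
            nn.Linear(obj_dim + effect_dim + 1, 150), nn.ReLU(), nn.Linear(150, 2))

    def forward(self, objects, static_flags, external=None):
        # objects: (n, obj_dim); static_flags: (n, 2) one-hot static/dynamic.
        n = objects.shape[0]
        effects = objects.new_zeros(n, self.relation_mlp[-1].out_features)
        for s in range(n):          # sender
            for r in range(n):      # receiver
                if s == r:
                    continue
                rel = torch.cat([objects[s], objects[r], static_flags[s]])
                effects[r] = effects[r] + self.relation_mlp(rel)  # sum incoming effects
        if external is None:
            external = objects.new_zeros(n, 1)  # placeholder, e.g., gravity
        velocity = self.object_mlp(torch.cat([objects, effects, external], dim=-1))
        return velocity  # the decoder adds this to the previous positions
```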

Pixel-based models. In contrast to object-based models, pixel-based forward-prediction models do not assume direct access to the attribute values of the objects. Instead, they operate on images depicting the object configuration, and maintain a single, global world state that is extracted by an image encoder. Our image encoder is a ResNet-18 network He_16 that is clipped at the res4 block. Objects in PHYRE can have seven different colors; hence, the input of the network consists of seven channels with binary values that indicate object presence. The representations extracted from the past frames are concatenated before being input into the two dynamics models that we study.


  • Spatial transformer networks (STN) jaderberg2015spatial split the input frame into segments by detecting objects ye2019compositional , and then encode each object segment using the encoder $e$. Specifically, we use a simple connected-components algorithm weaver1985centrosymmetric to split each frame channel into object segments. The dynamics model concatenates the object channels for the input frames, and predicts a rotation and translation for each channel of the last frame using a small convolutional network. The predicted transformation is applied to each channel independently in the decoder (see the warp sketch after this list). The resulting channels are combined into a single frame prediction by summing them. Inspired by the training of keypoint localizers he2017mask , we train STNs by minimizing the spatial cross-entropy, which sums the cross-entropy values of a softmax prediction over all seven channels.

  • Deconvolutional networks (Dec) directly predict the pixels in the next frame using a deconvolutional network that does not rely on a segmentation of the input frame(s). The representations for the last frames are concatenated along the channel dimension, and passed through a small convolutional network to generate a latent representation for the next frame. This latent representation is then decoded to pixels using a deconvolutional network, implemented as a series of five transposed-convolution and (bilinear) upsampling layers with intermediate ReLU activation functions. We found Decs are best trained by minimizing the per-pixel cross-entropy, which sums the cross-entropy of seven-way softmax predictions at each pixel.
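As an illustration of the STN decoder step, the sketch below applies a predicted rotation and translation to a single object channel using PyTorch's grid-sampling operators; the transformation parameters are assumed to come from the dynamics model:

```python
import torch
import torch.nn.functional as F

def warp_channel(channel, theta_rot, tx, ty):
    """Apply a rotation + translation to one (1, 1, H, W) object channel.

    theta_rot, tx, ty: scalar tensors predicted by the dynamics model,
    expressed in the normalized coordinates used by affine_grid.
    """
    cos, sin = torch.cos(theta_rot), torch.sin(theta_rot)
    affine = torch.stack([
        torch.stack([cos, -sin, tx]),
        torch.stack([sin,  cos, ty]),
    ]).unsqueeze(0)  # (1, 2, 3) affine matrix
    grid = F.affine_grid(affine, channel.shape, align_corners=False)
    return F.grid_sample(channel, grid, align_corners=False)

# The predicted frame is the sum of the independently warped object channels:
# frame = sum(warp_channel(c, r, tx, ty) for c, (r, tx, ty) in zip(channels, params))
```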

4.2 Task-Solution Models

We use our forward-prediction models in combination with a task-solution model that predicts whether a rollout solves a physical-reasoning task. In the physical-reasoning tasks we consider, the task-solution model needs to recognize whether two particular target objects are touching (task solved) or not (task not solved). This recognition is harder than it seems, particularly when using an object-based state representation. For example, evaluating whether or not the centers of two balls are "near" is insufficient because the radii of the balls need to be considered as well. For more complex objects, the model needs to evaluate complex relations between the two objects, as well as recognize other objects in the scene that may block contact. We note that good task-solution models may also correct for errors made by the forward-prediction model.

Per Figure 2, we implement the task-solution model using a simple binary classifier $c$ with parameters $\theta_c$. The classifier receives the (initial and predicted) frames and/or latent representations from the forward-prediction model as input, and produces a binary prediction: $\hat{y} = c(s_{1:t}, \hat{s}_{t+1:T}; \theta_c) \in \{0, 1\}$. Because the two types of forward-prediction models produce different outputs, we experiment with object-based classifiers and pixel-based classifiers that make predictions based on the simulation state represented by object features or pixels, respectively. We also experiment with pixel-based classifiers on object-based forward-prediction models by first rendering the object-based state to pixels.


  • Object-based classifier (Tx-Cls). We use a Transformer vaswani2017attention model that encodes the object type and position into a 128-dimensional encoding using a two-layer MLP. As before, a sinusoidal temporal position encoding is added to each object’s features. The resulting encodings for all objects over the time steps are concatenated, and used in a 16-head, 8-layer transformer encoder with LayerNorm. The resulting representations are input into another MLP that performs the binary classification indicating whether or not the task is solved (according to the model).

  • Pixel-based classifier (Conv3D-{Latent,Pixel}). Our pixel-based classifier poses the problem of classifying task solutions as a video-classification problem. Specifically, we adopt a 3D convolutional network for video classification Tran_15 ; tran2018closer ; carreira2017quo . We experiment with two variants of this model: (1) Conv3D-Latent: the latent state representations ($z$, $\hat{z}$) are concatenated along the temporal dimension, and passed through a stack of 3D convolutions with intermediate ReLUs, followed by a linear classifier; and (2) Conv3D-Pixel: the pixel representations ($s$, $\hat{s}$) are encoded using a ResNet-18 (up to res4), and classifications are made as in the Conv3D-Latent model. Note that Conv3D-Pixel can also be used in combination with object-based forward-prediction models, as the predictions of those models can be rendered to pixels.
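A minimal sketch of the Conv3D-Latent classifier; the channel counts and number of layers here are illustrative rather than our exact configuration:

```python
import torch
import torch.nn as nn

class Conv3DLatent(nn.Module):
    """Classify task success from a stack of latent frame representations."""

    def __init__(self, in_channels: int = 256):  # 256 = res4 width of ResNet-18
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(128, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(128, 1)

    def forward(self, latents):
        # latents: (batch, channels, time, height, width), i.e., the observed
        # and predicted latent states concatenated along the temporal axis.
        features = self.net(latents).flatten(1)
        return self.classifier(features)  # logit: task solved or not
```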

4.3 Search Strategy

We compose a forward-prediction model and a task-solution model to form a scoring function for action proposals. An action adds one or more additional objects to the initial world state. We sample actions uniformly at random and evaluate the value of the scoring function for the sampled actions. To evaluate the scoring function, we alter the initial state with the action, use the resulting state as input into the forward-prediction model, and evaluate the task-solution model on the output of the forward-prediction model. The search strategy selects the action that is most likely to solve the task according to the task-solution model, based on the output of the forward-prediction model.
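The resulting search is a simple loop; the sketch below assumes hypothetical forward_model, solution_model, sample_action, and place_action helpers standing in for the components described above:

```python
import torch

@torch.no_grad()
def best_action(initial_state, forward_model, solution_model,
                sample_action, place_action, n_actions=1000, n_steps=10):
    """Score randomly sampled actions; return the one most likely to solve the task."""
    best, best_score = None, float('-inf')
    for _ in range(n_actions):
        action = sample_action()                      # e.g., (x, y, radius) for a ball
        state = place_action(initial_state, action)   # alter the initial world state
        rollout = forward_model(state, n_steps)       # autoregressive prediction
        score = solution_model(state, rollout)        # task-solution log-odds
        if score > best_score:
            best, best_score = action, score
    return best
```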


Figure 3: AUCCESS (y-axis) of object-based and pixel-based task-solution models applied to the state obtained by rolling out an oracle forward-prediction model (the simulator) for $t$ seconds (x-axis), in the within-template (left) and cross-template (right) settings. The AUCCESS of the OPTIMAL agent bakhtin2019phyre is shown for reference. Shaded regions indicate the standard deviation across folds. We note that object-based task-solution models struggle compared to pixel-based ones, especially in the cross-template setting.

Figure 4: AUCCESS (y-axis) of task-solution models applied to the state obtained by rolling out five object-based and pixel-based forward-prediction models for $t$ seconds (x-axis). Forward-prediction models were initialized with ground-truth states. The AUCCESS of an agent without forward prediction (No-fwd) is shown for reference. Results are presented for the within-template (left) and cross-template (right) settings.

5 Experiments

We evaluate the performance of forward-prediction models on the B tier of the challenging PHYRE benchmark bakhtin2019phyre . We present our experimental setup and the results of our experiments below. The code and trained models to reproduce our results will be made available.

5.1 Experimental Setup

Training. To generate training data for our models, we sample task-action pairs in a balanced way: half of the samples solve the task and the other half do not. We generate training examples for the forward-prediction models by obtaining frames from the simulator at 1 fps, and sampling consecutive frames, used to bootstrap the forward model, from a random starting point in the obtained rollout. The model is trained to predict the frames that succeed the selected frames. For the task-solution model, we always sample the frames from the starting point of the rollout, i.e., frame 0. Along with these frames, the task-solution model also receives the autoregressively predicted frames from the forward-prediction model as input. We use 3 input frames for most experiments, and eventually relax this constraint to use 1 frame when comparing to the state of the art in the next section.

We train most forward-prediction models using teacher forcing williams1989learning : we only use ground-truth states as input into the forward model during training. The only exception is Dec, for which we observed better performance when predicted states are used as input during training. Furthermore, since Dec is trainable without teacher forcing, we are able to train it jointly with the task-solution model, as it no longer requires picking a random point in the rollout to train the forward model. In this case, we train both models from frame 0 of each simulation with equal weights on both losses, and refer to this model as Dec [Joint]. For object-based models, we add a small amount of Gaussian noise to object states during training to make the model robust battaglia2016interaction . We train all task-solution models and pixel-based forward-prediction models using mini-batch SGD. Object-based forward-prediction models are trained with Adam kingma2014adam . We selected hyperparameters for each model based on the AUCCESS on the first fold in the within-template setting; see the appendix for details.
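A sketch of the distinction, assuming a hypothetical forward_model with a one-step step() method: under teacher forcing the ground-truth frame is fed back at every step, whereas the Dec models consume their own predictions:

```python
def rollout_for_training(forward_model, gt_frames, n_context=3, teacher_forcing=True):
    """Predict every frame after the context, optionally feeding back predictions."""
    context = list(gt_frames[:n_context])
    predictions = []
    for t in range(n_context, len(gt_frames)):
        pred = forward_model.step(context[-n_context:])  # one-step prediction
        predictions.append(pred)
        # Teacher forcing conditions on ground truth; otherwise autoregress.
        context.append(gt_frames[t] if teacher_forcing else pred)
    return predictions  # compared against gt_frames[n_context:] in the loss
```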

Evaluation. At inference time, we bootstrap the forward-prediction models with the initial ground-truth states from the simulator for a given action, and autoregressively predict future states. The states are then passed into the task-solution model to predict whether the action will solve the task. Following bakhtin2019phyre , we use the task-solution model to score a fixed set of randomly selected actions for each task (the same set size unless otherwise specified). We rank these actions based on the task-solution model score to measure AUCCESS. We also measure forward-prediction accuracy (FPA; see Section 3) on the validation tasks using 10 random actions each, half of which solve the task and half of which do not. Following bakhtin2019phyre , we repeat all experiments for 10 folds and report the mean and standard deviation of both the AUCCESS and FPA measures.
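For reference, AUCCESS as defined in bakhtin2019phyre weights the success percentage at k attempts by log-spaced weights, emphasizing solutions found within few attempts. A sketch of the computation:

```python
import numpy as np

def auccess(attempts_to_solve, max_attempts=100):
    """AUCCESS over a set of tasks, following the definition in bakhtin2019phyre.

    attempts_to_solve: for each task, the number of attempts the agent needed,
        or None if the task was never solved within max_attempts.
    """
    ks = np.arange(1, max_attempts + 1)
    weights = np.log(ks + 1) - np.log(ks)  # emphasizes small k
    solved_within_k = np.array([
        100.0 * np.mean([a is not None and a <= k for a in attempts_to_solve])
        for k in ks])
    return (weights * solved_within_k).sum() / weights.sum()
```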


Figure 5: Left: Forward-prediction accuracy (FPA; y-axis) after rolling out five forward-prediction models for $t$ seconds (x-axis). Right: Maximum AUCCESS value across the roll-out (y-axis) as a function of forward-prediction accuracy averaged over 10 seconds (x-axis) for the five forward-prediction models. Shaded regions and error bars indicate the standard deviation over folds. Both are shown for the within-template setting.
Table 1: AUCCESS and success percentage of our Dec [Joint] agents using no roll-out (a frame-level model) and a full roll-out, compared to the current state-of-the-art agents bakhtin2019phyre on the PHYRE benchmark. In contrast to prior experiments, all agents here are conditioned on the initial frame only. Our agents outperform all prior work in both settings and on both metrics.

                               AUCCESS                 Success %age
                          Within       Cross        Within       Cross
RAND bakhtin2019phyre   13.7±0.5    13.0±5.0      7.7±0.8     6.8±5.0
MEM bakhtin2019phyre     2.4±0.3    18.5±5.1      2.7±0.5    15.2±5.9
DQN bakhtin2019phyre    77.6±1.1    36.8±9.7     81.4±1.9   34.5±10.2
Ours (no roll-out)      76.7±0.9    40.7±7.7     80.7±1.5    40.1±8.2
Ours (full roll-out)    80.0±1.2    40.3±8.0     84.1±1.8    39.2±8.6

5.2 Results

We organize our experimental results based on a series of research questions.

Can perfect forward prediction solve physical reasoning? We first evaluate whether perfect forward prediction can solve physical reasoning on PHYRE. We do so by using the PHYRE simulator as the forward-prediction model, and applying task-solution models to the predicted state. We exclude the Conv3D-Latent task-solution model in this experiment because it requires the latent representation of a learned forward-prediction model, which the simulator cannot provide. Figure 3 shows the AUCCESS of these models as a function of the number of seconds the forward-prediction model is rolled out. We compare model performance with that of the OPTIMAL agent bakhtin2019phyre , an agent that achieves the maximum attainable performance given that it ranks only the same fixed set of actions. We observe that task-solution models can work nearly perfectly in the within-template setting when the forward prediction is rolled out sufficiently long. We also observe that pixel-based task-solution models perform better than object-based models, especially in the cross-template setting. This suggests that it is more difficult for object-based models to determine whether or not two objects are touching than for pixel-based models, because the required computations are more complex. In preliminary experiments, we found that Conv3D-Latent performs better than Conv3D-Pixel when combined with learned pixel-based forward-prediction models (see appendix). In the following experiments, we therefore use Conv3D-Latent as the task-solution model for pixel-based forward-prediction models, and Conv3D-Pixel for object-based models, by rendering object-based predictions to pixels.

How well do forward-prediction models solve physical reasoning? We evaluate the performance of our learned forward-prediction models on the PHYRE tasks. Akin to the previous experiment, we roll out the forward-prediction models for $t$ seconds and evaluate the corresponding task-solution model on the state predictions and the input states. Figure 4 presents the AUCCESS of this approach as a function of the number of seconds ($t$) that the forward-prediction models were rolled out. The AUCCESS of an agent without forward prediction (No-fwd) is shown for reference. The results show that forward prediction can substantially improve AUCCESS in the within-template setting. The pixel-based Dec model performs similarly to models that operate on object-based state representations, whether extracted (STN) or ground truth (IN and Tx). A key advantage of Dec is that it allows for end-to-end training of the forward-prediction and task-solution models. The resulting Dec [Joint] model performs best in our experiments, which is why we focus on it in subsequent experiments. Although the within-template experiment shows that forward prediction has potential for physical reasoning, AUCCESS plateaus after the first few seconds of roll-out. This suggests the forward-prediction models are only accurate for a short period of time. Also, forward-prediction models show little benefit in the cross-template setting, which suggests they generalize poorly across templates.

Figure 6: Left: Per-template AUCCESS of Dec [Joint] 1f without and with forward prediction, for the five task templates that benefit the least from forward prediction and the five templates that benefit the most. Right: Per-template AUCCESS of the model as a function of the number of objects in the task template, and the improvement in per-template AUCCESS ($\Delta$ AUCCESS) obtained by the model, also as a function of the number of objects.

Does better forward prediction imply better physical reasoning? Figure 5 (left) measures the forward-prediction accuracy (FPA) of our forward-prediction models as a function of roll-out time. We observe that FPA generally decreases with roll-out time although, interestingly, Dec recovers over time. While all models obtain fairly high FPA, models that utilize object-based representations (STN, IN, and Tx) clearly outperform their purely pixel-based counterparts. This is intriguing because, in prior experiments, Dec models performed best on PHYRE in terms of AUCCESS. To investigate this relation in more detail, Figure 5 (right) shows the maximum AUCCESS across the roll-out as a function of FPA averaged over 10 seconds. The results confirm that more pixel-accurate forward predictions do not necessarily lead to better performance on the downstream physical-reasoning tasks in PHYRE.

How do forward-prediction agents compare to the state-of-the-art? Hitherto, all our experiments assumed access to three input frames, which is not the setting considered in bakhtin2019phyre . To facilitate comparisons with prior work, we develop a Dec agent that requires only one input frame: we pad the first frame with two empty frames and train the model exclusively on roll-outs that start from the first frame, without teacher forcing. We refer to the resulting model as Dec [Joint] 1f. Table 1 compares the performance of Dec [Joint] 1f to the state-of-the-art on PHYRE bakhtin2019phyre in terms of AUCCESS and success percentage @ 10 (i.e., the percentage of tasks that were solved within 10 attempts). The results show that Dec [Joint] 1f outperforms the previous best reported agent (FiLM-DQN bakhtin2019phyre ) on both metrics in both the within-template and the cross-template settings. In the within-template setting, the performance of Dec [Joint] 1f increases substantially for long roll-outs. This demonstrates the benefits of using a forward-prediction modeling approach to PHYRE in that setting. Having said that, forward prediction did not help in our experiments in the cross-template setting, presumably because our forward-prediction models do not generalize well across templates.

Which tasks benefit from using a forward-prediction model? To investigate on which task templates forward-prediction models help the most, we compare Dec [Joint] 1f without forward prediction and with forward prediction in terms of per-template AUCCESS. We define per-template AUCCESS as the average AUCCESS over all tasks in a template in the within-template setting. Figure 6 (left) shows the per-template AUCCESS for the five templates on which forward-prediction models help the least (left five groups) and the five templates on which these models help the most (right five groups). Qualitatively, we observe that forward prediction does not help much in "simple" tasks that comprise a few objects, whereas it helps a lot in more "complex" tasks. This observation is corroborated by the results in Figure 6 (right), in which we show AUCCESS and the improvement in AUCCESS due to forward modeling ($\Delta$ AUCCESS) as a function of the number of objects in the task. We observe that AUCCESS decreases with the number of objects in the task, but that the benefits of forward prediction increase with the number of objects.

6 Discussion

Figure 7: Rollouts produced by (1) the simulator and (2) an STN trained only on tasks from the corresponding task template, for three slightly different actions on the same task: an action (red ball) solving the task, a slightly smaller ball, and a slightly larger ball. Although the STN produces realistic rollouts, it does not perfectly emulate the simulator. In line with Figure 1, the small variations in the action change whether the action solves the task, but the learned model is unable to capture those fine differences.

While the results of our experiments demonstrate the potential of forward-prediction models in physical reasoning, they also highlight that much work is still needed for that potential to materialize. In particular, initial qualitative evaluation of our forward-prediction models suggests that they may memorize the dynamics of objects in a task without accurately modeling the object interactions. This observation may shed some light on the success of particle-based forward-prediction models li2018learning ; gonzalez2020learning , which limit the number of object configurations for which the subsequent dynamics need to be memorized. Some forward-prediction models may exhibit better generalizing properties by making additional assumptions: for example, conservation of energy or point mass objects greydanus2019hamiltonian ; cranmer2020lagrangian ; chen2019symplectic . However, such assumptions are invalid even in simple environments such as PHYRE.

In general, learning more generic forward-prediction models remains challenging because physical environments exhibit chaotic behavior. As shown in Figure 7, our learned models produce realistic predictions, but they are a long way from the high-fidelity rollouts needed to perfectly solve the tasks. This perhaps explains why our learned models perform significantly worse than using the simulator as the forward-prediction model. Furthermore, we limit our exploration in this work to deterministic forward-prediction models, which encompass the vast majority of popular approaches in physical reasoning li2018learning ; gonzalez2020learning ; battaglia2016interaction ; watters2017visual ; kipf2020contrastive . While PHYRE is a deterministic environment, future work may have to embrace uncertainty in forward predictions by developing strategies that cope with the inherent uncertainty in the forecasts of forward-prediction models. Efficiently incorporating such multiple futures into task-solution models would also be an interesting direction for future research.

Broader Impact

The authors do not foresee this study having major societal consequences, either positive or negative. The work considers models that solve artificial physical puzzles, which appear to have few ethical aspects.

The authors thank Rob Fergus, Denis Yarats, Brandon Amos, Ishan Misra, Eltayeb Ahmed, Anton Bakhtin and the entire Facebook AI Research team for many helpful discussions.

References

  • [1] K. R. Allen, K. A. Smith, and J. B. Tenenbaum. The Tools challenge: Rapid trial-and-error learning in physical problem solving. In CogSci, 2020.
  • [2] A. Bakhtin, L. van der Maaten, J. Johnson, L. Gustafson, and R. Girshick. PHYRE: A new benchmark for physical reasoning. In NeurIPS, 2019.
  • [3] F. Baradel, N. Neverova, J. Mille, G. Mori, and C. Wolf. Cophy: Counterfactual learning of physical dynamics. In ICLR, 2020.
  • [4] P. Battaglia, R. Pascanu, M. Lai, D. J. Rezende, et al. Interaction networks for learning about objects, relations and physics. In NeurIPS, 2016.
  • [5] P. W. Battaglia, J. B. Hamrick, and J. B. Tenenbaum. Simulation as an engine of physical scene understanding. PNAS, 2013.
  • [6] J. Carreira and A. Zisserman. Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset. In CVPR, 2017.
  • [7] Z. Chen, J. Zhang, M. Arjovsky, and L. Bottou. Symplectic recurrent neural networks. arXiv preprint arXiv:1909.13334, 2019.
  • [8] K. Cho, B. van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In EMNLP, 2014.
  • [9] M. Cranmer, S. Greydanus, S. Hoyer, P. Battaglia, D. Spergel, and S. Ho. Lagrangian neural networks. In ICLR Workshop, 2020.
  • [10] C. Finn, I. Goodfellow, and S. Levine. Unsupervised learning for physical interaction through video prediction. In NeurIPS, 2016.
  • [11] S. Greydanus, M. Dzamba, and J. Yosinski. Hamiltonian neural networks. In NeurIPS, 2019.
  • [12] O. Groth, F. B. Fuchs, I. Posner, and A. Vedaldi. Shapestacks: Learning vision-based physical intuition for generalised object stacking. In ECCV, 2018.
  • [13] D. Hafner, T. Lillicrap, J. Ba, and M. Norouzi. Dream to control: Learning behaviors by latent imagination. In ICLR, 2020.
  • [14] K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask R-CNN. In ICCV, 2017.
  • [15] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
  • [16] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 1997.
  • [17] M. Jaderberg, K. Simonyan, A. Zisserman, et al. Spatial transformer networks. In NeurIPS, 2015.
  • [18] M. Janner, S. Levine, W. T. Freeman, J. B. Tenenbaum, C. Finn, and J. Wu. Reasoning about physical interactions with object-oriented prediction and planning. In ICLR, 2019.
  • [19] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • [20] T. Kipf, E. Fetaya, K.-C. Wang, M. Welling, and R. Zemel. Neural relational inference for interacting systems. In ICML, 2018.
  • [21] T. Kipf, E. van der Pol, and M. Welling. Contrastive learning of structured world models. In ICLR, 2020.
  • [22] J. R. Kubricht, K. J. Holyoak, and H. Lu. Intuitive physics: Current research and controversies. Trends in cognitive sciences, 2017.
  • [23] A. Lerer, S. Gross, and R. Fergus. Learning physical intuition of block towers by example. In ICML, 2016.
  • [24] Y. Li, T. Lin, K. Yi, D. Bear, D. L. K. Yamins, J. Wu, J. B. Tenenbaum, and A. Torralba. Visual grounding of learned physical models. In ICML, 2020.
  • [25] Y. Li, J. Wu, R. Tedrake, J. B. Tenenbaum, and A. Torralba. Learning particle dynamics for manipulating rigid bodies, deformable objects, and fluids. In ICLR, 2018.
  • [26] S. Liu, T. Li, W. Chen, and H. Li. Soft rasterizer: A differentiable renderer for image-based 3D reasoning. In ICCV, 2019.
  • [27] M. M. Loper and M. J. Black. OpenDR: An approximate differentiable renderer. In ECCV, 2014.
  • [28] N. F. Rajani, R. Zhang, Y. C. Tan, S. Zheng, J. Weiss, A. Vyas, A. Gupta, C. Xiong, R. Socher, and D. Radev. ESPRIT: Explaining Solutions to Physical Reasoning Tasks. In ACL, 2020.
  • [29] R. Riochet, M. Y. Castro, M. Bernard, A. Lerer, R. Fergus, V. Izard, and E. Dupoux. IntPhys: A framework and benchmark for visual intuitive physics reasoning. arXiv preprint arXiv:1803.07616, 2018.
  • [30] A. Sanchez-Gonzalez, J. Godwin, T. Pfaff, R. Ying, J. Leskovec, and P. W. Battaglia. Learning to simulate complex physics with graph networks. arXiv preprint arXiv:2002.09405, 2020.
  • [31] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri. Learning spatiotemporal features with 3d convolutional networks. In ICCV, 2015.
  • [32] D. Tran, H. Wang, L. Torresani, J. Ray, Y. LeCun, and M. Paluri. A closer look at spatiotemporal convolutions for action recognition. In CVPR, 2018.
  • [33] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need. In NeurIPS, 2017.
  • [34] R. Villegas, J. Yang, Y. Zou, S. Sohn, X. Lin, and H. Lee. Learning to generate long-term future via hierarchical prediction. In ICML, 2017.
  • [35] N. Watters, D. Zoran, T. Weber, P. Battaglia, R. Pascanu, and A. Tacchetti. Visual interaction networks: Learning a physics simulator from video. In NeurIPS, 2017.
  • [36] J. R. Weaver. Centrosymmetric (cross-symmetric) matrices, their basic properties, eigenvalues, and eigenvectors. The American Mathematical Monthly, 1985.
  • [37] R. J. Williams and D. Zipser. A learning algorithm for continually running fully recurrent neural networks. Neural computation, 1989.
  • [38] J. Wu, E. Lu, P. Kohli, W. T. Freeman, and J. B. Tenenbaum. Learning to see physics via visual de-animation. In NeurIPS, 2017.
  • [39] S. Xingjian, Z. Chen, H. Wang, D.-Y. Yeung, W.-K. Wong, and W.-c. Woo. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In NeurIPS, 2015.
  • [40] T. Ye, X. Wang, J. Davidson, and A. Gupta. Interpretable intuitive physics model. In ECCV, 2018.
  • [41] Y. Ye, M. Singh, A. Gupta, and S. Tulsiani. Compositional video prediction. In ICCV, 2019.
  • [42] K. Yi, C. Gan, Y. Li, P. Kohli, J. Wu, A. Torralba, and J. B. Tenenbaum. CLEVRER: Collision events for video representation and reasoning. In ICLR, 2020.

Appendix A Rollout Visualizations

Visualization of rollouts for all our forward-prediction models (as were used to compute forward-prediction accuracy), are available here:

Appendix B Rollout Accuracy in Cross-Template Setting

Similar to Figure 5 in the main paper, we show the forward-prediction accuracy in the cross-template setting in Figure 8. As expected, accuracy is generally lower in the cross-template setting, showing that the models struggle to generalize beyond the training templates. Otherwise, we see the same trends as in Figure 5.

It is interesting to note that the accuracy of Dec goes down and then up, similar to what we observed in the within-template case. This is likely because Dec is better able to predict the final position of the objects than the actual path the objects take. Since it tends to smear out the object pixels when it is not confident of an object's position, the model ends up with lower accuracy during the middle part of the rollout.

Figure 8: Left: Forward-prediction accuracy (FPA; y-axis) after rolling out five forward-prediction models for $t$ seconds (x-axis). Right: Maximum AUCCESS value across the roll-out (y-axis) as a function of forward-prediction accuracy averaged over 10 seconds (x-axis) for the five forward-prediction models. Shaded regions and error bars indicate the standard deviation over folds. Both are shown for the cross-template setting.
Figure 9: Comparison of the Conv3D-{Latent, Pixel} classifiers on learned pixel-based forward-prediction models (Dec and STN) in the within-template setting, as AUCCESS over roll-out seconds. Conv3D-Latent performs as well as or better than Conv3D-Pixel, and hence we use it for the experiments in the paper.
Figure 10: Comparison of the Tx-Cls and Conv3D-Pixel classifiers on learned object-based forward-prediction models (IN and Tx) in the within-template setting, as AUCCESS over roll-out seconds. Since Conv3D-Pixel generally performed better, we use it for the experiments shown in the paper.

Appendix C Other Task-Solution Models on Learned Forward-Prediction Models

C.1 Conv3D-Latent vs. Conv3D-Pixel on Pixel-Based Forward-Prediction Models

In Figure 9, we compare Conv3D-Latent and Conv3D-Pixel on learned pixel-based forward models. We find that Conv3D-Latent generally performs better, especially for Dec, since that model does not produce accurate future predictions in terms of pixel accuracy (FPA). However, its latent space still contains useful information, which Conv3D-Latent is able to exploit successfully. Hence, for pixel-based forward-prediction models, given the option between latent- and pixel-space task-solution classifiers, we choose Conv3D-Latent for the experiments in the paper.

C.2 Tx-Cls vs. Conv3D-Pixel on Object-Based Forward-Prediction Models

In Figure 10, we compare the object-based task-solution model with the pixel-based one on learned object-based forward-prediction models. Similar to the observations with the ground-truth simulator in Figure 3 of the main paper, the object-based task-solution model (Tx-Cls) performed worse than its pixel-based counterpart (Conv3D-Pixel), even with learned forward-prediction models. Hence, for the experiments in the paper, we render the object-based models' predictions to pixels and use a pixel-based task-solution model (Conv3D-Pixel). Note that the other pixel-based task-solution model, Conv3D-Latent, is not applicable here, as object-based forward-prediction models do not produce the spatial latent representation that Conv3D-Latent operates on.

Appendix D Hyperparameters

All experiments were performed using up to 8 V100 32GB GPUs. The actual GPU requirements were adjusted depending on the number of steps the model was rolled out for during training. The training time for all forward-prediction models was around 2 days, and the task-solution models took up to 4 days (depending on how far the forward-prediction model was rolled out). Our code will be made available to reproduce our results.

D.1 Forward-Prediction Models

We train all object-based models with teacher forcing. We use a batch size of 8 per GPU over 8 GPUs. For each batch element, we sample clips of length 4 from the rollout, using the first three frames as context and the fourth as the ground-truth prediction target. We train for 200K iterations. We add Gaussian noise to the training data, similar to battaglia2016interaction . We add noise to 20% of the data for the first 2.5% of training, decreasing the percentage of data that is noisy to 0% over the next 10% of training; a sketch of this schedule is given below. The object-based forward models only make predictions for dynamic objects, and use a hard tanh to clip the predicted state values to [0, 1]. The models do not use the state of static objects or the angle of ball objects when computing the loss. For angles, we compute the mean squared error between the cosines of the predicted and ground-truth angles. We now describe the other hyperparameters specific to each object-based and pixel-based forward-prediction model.
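A sketch of the noise schedule described above; the noise standard deviation sigma is an assumed value, as the distribution parameters are not restated here:

```python
import torch

def noisy_fraction(step: int, total_steps: int) -> float:
    """Fraction of training examples that receive state noise at this step."""
    warmup, decay = 0.025 * total_steps, 0.10 * total_steps
    if step < warmup:
        return 0.2                                    # first 2.5% of training
    if step < warmup + decay:
        return 0.2 * (1.0 - (step - warmup) / decay)  # linear decay to 0
    return 0.0

def maybe_add_noise(states, step, total_steps, sigma=0.01):
    # sigma is an assumption for illustration; in our experiments the noise
    # distribution follows battaglia2016interaction.
    mask = torch.rand(states.shape[0]) < noisy_fraction(step, total_steps)
    noise = sigma * torch.randn_like(states) * mask.view(-1, *([1] * (states.dim() - 1)))
    return states + noise
```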


  • IN (object-based): We train these models using Adam and a learning rate of 0.001. We use two interaction nets: one that makes predictions based on the last two context frames, and one that makes predictions based on the first two context frames. Using the same architecture as battaglia2016interaction , the relation encoder is a five-layer MLP, with hidden size 100 and ReLU activations, that embeds the relation into a 50-dimensional vector. The aggregated and external effects are passed through a three-layer MLP with hidden size 150 and ReLU activations. Each interaction net makes a velocity prediction for the object, and the results are concatenated with the object's previous state and passed to a three-layer MLP with hidden size 64 and ReLU activations to make the final velocity prediction per object. The predicted state is the sum of the velocity and the object's previous state.

  • Tx (object-based): We train these models using Adam and a learning rate of 0.0001. We use a two-layer MLP with a hidden size of 128 and ReLU activations to embed the objects into a 128-dimensional vector. A sinusoidal temporal position encoding vaswani2017attention of time $t$ is added to the features of each object. The result is passed to a Transformer encoder with 8 heads and 6 layers. The embeddings corresponding to the last time step are passed to a final three-layer MLP with ReLU activations and hidden size 100 to make the final prediction. The model predicts the velocity of each object, which is summed with the object's last state to obtain the new state prediction.

  • STN (pixel-based): We also train these models with teacher forcing. Since pixel-based models involve a ResNet-18 image encoder, we use a batch size of 2 per GPU, over 8 GPUs. For each batch element, we sample clips of length 16 from the rollout, and construct all possible sets with 3 context frames and a 4th ground-truth prediction frame. The models were trained using a learning rate of 0.00005 and the Adam optimizer, with cosine annealing over 100K iterations. The scene is split into objects using the connected-components algorithm, splitting each color channel into up to 2 objects. The model then predicts a rotation and translation for each object channel, which are used to construct an affine transformation matrix. The last context frame is transformed using this affine matrix to generate the predicted frame, which is passed through the image encoder to obtain the latent representation for the predicted frame (i.e., for STN, the future frame is predicted before the future latent representation).

  • Dec (pixel-based): For these models, we do not use teacher forcing, and instead use the last predicted states to predict future states at training time. We use a batch size of 2 per GPU over 8 GPUs. For each batch element, we sample a 20-length clip from the simulator rollout, and train the model to predict up to 10 steps into the future (note that with teacher forcing, models are trained only to predict 1 step into the future given 3 ground-truth states). The model is trained for 50K iterations with a learning rate of 0.01 and the SGD+momentum optimizer.

  • Dec [Joint] (pixel-based): For this model, we train the forward-prediction and task-solution models jointly, with equally weighted losses. We sample a 13-length rollout, always starting from frame 0. Instead of considering all possible starting points within the 13 states (as in Dec and STN), we only use the first 3 states to bootstrap, and roll the model out for up to 10 steps into the future. We only incur forward-prediction losses for the first 5 of those 10 steps, as we observed instability in training when predicting for all steps. Here, we use a batch size of 8 per GPU, over 8 GPUs. The model is trained with a learning rate of 0.0125 with SGD+momentum for 150K iterations. The task-solution model used is Conv3D-Latent, which operates on the latent representation learned by the forward-prediction model.

D.2 Task-Solution Models

For all these models, we always sample frames from the starting point of each simulation, and roll the forward-prediction model out for a varying number of states autoregressively, before passing the result through one of these task-solution models.


  • Tx-Cls (for object-based): We train a Transformer encoder model on the object states predicted by object-based forward-prediction models. The object states are first encoded into a 128-dimensional embedding using a two-layer MLP with ReLU activations. As before, a sinusoidal temporal position encoding vaswani2017attention is added to each object's features. The resulting encodings are passed to a Transformer encoder that has 16 heads and 8 layers and uses layer normalization. The encoding is passed to a three-layer MLP with hidden size 128 and ReLU activations to classify the embedding as solved or not solved. We use a batch size of 128 and train for 150K iterations with the SGD optimizer, a learning rate of 0.002, and momentum of 0.9.

  • Conv3D-Latent (for pixel-based): We train a 5-layer 3D convolutional model (with ReLU in between) on the latent space learned by forward-prediction models. We use a batch size of 64 and train for 100K iterations with SGD optimizer and learning rate of 0.0125.

  • Conv3D-Pixel (for both object and pixel-based): We train a 2D + 3D convolutional model on future states rendered as pixels. We use a batch size of 64 and train for 100K iterations with SGD optimizer, learning rate of 0.0125, and momentum of 0.9. This model consists of 4 ResNet-18 blocks to encode the frames, followed by 5 3D convolutional layers over the frames’ latent representation, as used in Conv3D-Latent. When object-based models are trained with this task-solution model, we run the forward-prediction model and the renderer in the data loader threads (on CPU), and feed the predicted frames into the task-solution model (training on GPU). We found this approach to be more computationally efficient than running both forward-prediction and task-solution models on GPU, and in between the two, swapping out the data from GPU to CPU and back, to perform the rendering on CPU.