Flexible Neural Representation for Physics Prediction

06/21/2018
by   Damian Mrowca, et al.

Humans have a remarkable capacity to understand the physical dynamics of objects in their environment, flexibly capturing complex structures and interactions at multiple levels of detail. Inspired by this ability, we propose a hierarchical particle-based object representation that covers a wide variety of types of three-dimensional objects, including both arbitrary rigid geometrical shapes and deformable materials. We then describe the Hierarchical Relation Network (HRN), an end-to-end differentiable neural network based on hierarchical graph convolution, that learns to predict physical dynamics in this representation. Compared to other neural network baselines, the HRN accurately handles complex collisions and nonrigid deformations, generating plausible dynamics predictions at long time scales in novel settings, and scaling to large scene configurations. These results demonstrate an architecture with the potential to form the basis of next-generation physics predictors for use in computer vision, robotics, and quantitative cognitive science.

1 Introduction

Humans efficiently decompose their environment into objects, and reason effectively about the dynamic interactions between these objects (Spelke et al., 1992; Tenenbaum et al., 2011). Although human intuitive physics may be quantitatively inaccurate under some circumstances (McCloskey et al., 1980), humans make qualitatively plausible guesses about dynamic trajectories of their environments over long time horizons (Smith and Vul, 2013). Moreover, they either are born knowing, or quickly learn about, concepts such as object permanence, occlusion, and deformability, which guide their perception and reasoning (Spelke, 1990).

An artificial system that could mimic such abilities would be of great use for applications in computer vision, robotics, reinforcement learning, and many other areas. While traditional physics engines constructed for computer graphics have made great strides, such routines are often hard-wired and thus challenging to integrate as components of larger learnable systems. Creating end-to-end differentiable neural networks for physics prediction is therefore an appealing idea.

Recently, Chang et al. (2016) and Battaglia et al. (2016) illustrated the use of neural networks to predict physical object interactions in (mostly) 2D scenarios by proposing object-centric and relation-centric representations. Common to these works is the treatment of scenes as graphs, with nodes representing object point masses and edges describing the pairwise relations between objects (e.g. gravitational, spring-like, or repulsive relationships). Object relations and physical states are used to compute the pairwise effects between objects. After the effects on an object are aggregated, the future physical state of the environment is predicted on a per-object basis. This approach is very promising in its ability to explicitly handle object interactions. However, a number of challenges remain in generalizing it to real-world physical dynamics, including representing arbitrary geometric shapes with sufficient resolution to capture complex collisions, working with objects at different scales simultaneously, and handling non-rigid objects of nontrivial complexity.

Figure 1: Predicting physical dynamics. Given past observations, the task is to predict the future physical state of a system. In this example, a cube deforms as it collides with the ground. The top row shows the ground truth and the bottom row the prediction of our physics prediction network.

Several of these challenges are illustrated by the fast-moving deformable cube sequence depicted in Figure 1. Humans can flexibly vary the level of detail at which they perceive such objects in motion: the cube may naturally be conceived of as an undifferentiated point mass as it moves along its initial kinematic trajectory, but as it collides with and bounces up from the floor, its complex rectilinear substructure and nonrigid material properties become important for understanding what happens and predicting future interactions. The ease with which the human mind handles such complex scenarios is an important explicandum of cognitive science, and also a key challenge for artificial intelligence. Motivated by both of these goals, our aim here is to develop a new class of neural network architectures with this human-like ability to reason flexibly about the physical world.

To this end, it would be natural to extend the interaction network framework by representing each object as a (potentially large) set of connected particles. In such a representation, individual constituent particles could move independently, allowing the object to deform while being constrained by pairwise relations preventing the object from falling apart. However, this type of particle-based representation introduces a number of challenges of its own. Conceptually, it is not immediately clear how to efficiently propagate effects across such an object. Moreover, representing every object with hundreds or thousands of particles would result in an exploding number of pairwise relations, which is both computationally infeasible and cognitively unnatural.

As a solution to these issues, we propose a novel cognitively-inspired hierarchical graph-based object representation that captures a wide variety of complex rigid and deformable bodies (Section 3), and an efficient hierarchical graph-convolutional neural network that learns physics prediction within this representation (Section 4). Evaluating on complex 3D scenarios, we show substantial improvements relative to strong baselines both in quantitative prediction accuracy and qualitative measures of prediction plausibility, and evidence for generalization to complex unseen scenarios (Section 5).

2 Related Work

An efficient and flexible predictor of physical dynamics has been an outstanding question in neural network design. In computer vision, modeling moving objects in images or videos for action recognition, future prediction, and object tracking is of great interest. Similarly in robotics, action-conditioned future prediction from images is crucial for navigation or object interactions. However, future predictors operating directly on 2D image representations often fail to generate sharp object boundaries and struggle with occlusions and remembering objects when they are no longer visually observable (Agrawal et al., 2016; Fragkiadaki et al., 2015; Finn et al., 2016; Lerer et al., 2016; Li et al., 2016; Mottaghi et al., 2016a, b; Haber et al., 2018). Representations using 3D convolution or point clouds are better at maintaining object shape (Tran et al., 2015, 2016; Byravan and Fox, 2017; Qi et al., 2017a, b), but do not entirely capture object permanence, and can be computationally inefficient. More similar to our approach are inverse graphics methods that extract a lower dimensional physical representation from images that is used to predict physics (Kulkarni et al., 2014, 2015; Whitney et al., 2016; Watters et al., 2017; Wu et al., 2015, 2016a; Brand, 1997; Wang et al., 2018). Our work draws inspiration from and extends that of Chang et al. (2016) and Battaglia et al. (2016), which in turn use ideas from graph-based neural networks (Scarselli et al., 2009; Sutskever and Hinton, 2009; Bruna et al., 2013; Li et al., 2015; Henaff et al., 2015; Duvenaud et al., 2015; Defferrard et al., 2016; Kipf and Welling, 2016; Bronstein et al., 2017; Schlichtkrull et al., 2017). Most of the existing work, however, does not naturally handle complex scene scenarios with objects of widely varying scales or deformable objects with complex materials.

Physics simulation has also long been studied in computer graphics, most commonly for rigid-body collisions (Baraff, 2001; Coumans, 2010). Particles or point masses have also been used to represent more complex physical objects, with the neural network-based NeuroAnimator (Grzeszczuk et al., 1998) being one of the earliest examples to use a hierarchical particle representation to advance the motion of physical objects. Our particle-based object representation also draws inspiration from recent work on (non-neural-network) physics simulation, in particular the NVIDIA FleX engine (Macklin et al., 2014; Bender et al., 2015). However, unlike this work, our solution is an end-to-end differentiable neural network that can learn from data.

Recent research in computational cognitive science has posited that humans run physics simulations in their mind (Battaglia et al., 2013; Bates et al., 2015; Hamrick et al., 2011; Ullman et al., 2014; Hegarty, 2004). It seems plausible that such simulations happen at just the right level of detail which can be flexibly adapted as needed, similar to our proposed representation. Both the ability to imagine object motion as well as to flexibly decompose an environment into objects and parts form an important prior that humans rely on for further learning about new tasks, when generalizing their skills to new environments or flexibly adapting to changes in inputs and goals (Lake et al., 2017).

3 Hierarchical Particle Graph Representation

A key factor for predicting the future physical state of a system is the underlying representation used. A common simplifying, but restrictive, assumption is that all objects are rigid. A rigid body can be represented by a single point mass and unambiguously situated in space by specifying its position and orientation, together with a separate data structure describing the object's shape and extent; examples are 3D polygon meshes and various forms of 2D or 3D masks extracted from perceptual data (Byravan and Fox, 2017; Finn et al., 2016). The rigid body assumption describes only a fraction of the real world: it excludes, for example, soft bodies, cloths, fluids, and gases, and precludes objects from breaking apart or combining. Objects are, however, divisible and made up of a potentially large number of smaller sub-parts.

Given a scene containing a set of objects, the core idea is to represent each object with a set of particles. Each particle's state at time $t$ is described by a vector consisting of its position, velocity, and mass; we refer to a particle and its state vector interchangeably.

Particles are spaced out across an object to fully describe its volume. In principle, particles can be placed arbitrarily within an object, so less complex parts can be described with fewer particles (e.g. 8 particles fully define a cube), while more complicated parts (e.g. a long rod) can be represented with more. We define $P$ as the set of all particles in the observed scene.
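To make the representation concrete, the following minimal sketch (in Python/NumPy; the class name, field layout, and the 7-dimensional feature stacking are our own illustrative assumptions, not the paper's exact data layout) stores one position, velocity, and mass per particle and stacks them into per-particle state vectors:

```python
import numpy as np

class ParticleState:
    """Minimal particle-based scene state: one row per particle."""

    def __init__(self, positions, velocities, masses, object_ids):
        self.positions = np.asarray(positions, dtype=np.float32)    # (N, 3)
        self.velocities = np.asarray(velocities, dtype=np.float32)  # (N, 3)
        self.masses = np.asarray(masses, dtype=np.float32)          # (N,)
        self.object_ids = np.asarray(object_ids, dtype=np.int64)    # (N,) object membership

    def as_matrix(self):
        """Stack position, velocity and mass into one per-particle feature vector."""
        return np.concatenate(
            [self.positions, self.velocities, self.masses[:, None]], axis=1
        )  # (N, 7)

# A unit cube described by its 8 corner particles, all belonging to object 0.
corners = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                   dtype=np.float32)
cube = ParticleState(corners, np.zeros_like(corners), np.ones(8),
                     np.zeros(8, dtype=np.int64))
print(cube.as_matrix().shape)  # (8, 7)
```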

Figure 2: Hierarchical graph-based object representation. An object is decomposed into particles. Particles (of the same color) are grouped into a hierarchy representing multiple object scales. Pairwise relations constrain particles in the same group and to ancestors and descendants.

To fully describe a scene containing multiple objects physically, we also need to define how the particles relate to each other. Similar to Battaglia et al. (2016), we represent the relation between two particles with a pairwise relation vector. Each relation within an object encodes material properties; for a soft body, for example, it represents the local material stiffness, which need not be uniform within an object. Arbitrarily-shaped objects with potentially nonuniform materials can be represented in this way. Note that the physical interpretation of a relation is learned from data rather than hard-coded through equations. Overall, we represent the scene by a node-labeled graph $G = \langle P, R \rangle$ in which the particles $P$ form the nodes and the relations $R$ define the (directed) edges. Except for the case of collisions, different objects are disconnected components within $G$.

The graph $G$ is used to propagate effects through the scene. It is infeasible to use a fully connected graph for propagation, as the number of pairwise-relationship computations grows quadratically with the number of particles. To keep the number of relations linear in the number of particles, we construct a hierarchical scene (di)graph $G_H$ from $G$ in which the nodes of each connected component are organized into a tree structure: First, we initialize the leaf nodes of $G_H$ with the original particle set $P$. Then, we extend $G_H$ by a root node for each connected component (object) in $G$. The root node states are defined as aggregates of their leaf node states. The root nodes are connected to their leaves with directed edges and vice versa.

At this point, $G_H$ consists of the leaf particles representing the finest scene resolution and one root node per connected component describing the scene at the object level. To obtain intermediate levels of detail, we then cluster the leaves in each connected component into smaller subcomponents using a modified k-means algorithm. We add one node for each new subcomponent and connect its leaves to the newly added node and vice versa. The newly added node is labeled as the direct ancestor of its leaves, and its leaves are siblings of each other. We then connect the added intermediate nodes with each other if and only if their respective subcomponent leaves are connected. Lastly, we add directed edges from the root node of each connected component to the new intermediate nodes in that component, and remove edges between leaves that are not in the same cluster. The process then recurses within each new subcomponent. See Algorithm 1 in the supplementary material for details.

When describing effect propagation, we refer to a particle's siblings, its ancestors, its parent, and its descendants, where a particle's ancestors include its parent and all higher-level nodes above it in the hierarchy. Note that in $G_H$, directed edges connect leaves to their ancestors, siblings to each other, and ancestors to their descendants; see Figure 3b.

4 Physics Prediction Model

In this section we introduce our physics prediction model. It is based on hierarchical graph convolution, an operation which propagates relevant physical effects through the graph hierarchy.

4.1 Hierarchical Graph Convolutions For Effect Propagation

In order to predict the future physical state, we need to resolve the constraints that particles connected in the hierarchical graph impose on each other. We use graph convolutions to compute and propagate these effects. Following Battaglia et al. (2016), we implement a pairwise graph convolution using two basic building blocks: (1) a pairwise processing unit that takes the sender particle state, the receiver particle state, and their relation as input and outputs the effect of the sender on the receiver, and (2) a commutative aggregation operation that collects these effects and computes the overall effect on the receiver; in our case, this is a simple summation over all incoming effects. Together, these two building blocks form a convolution on graphs, as shown in Figure 3a.

Figure 3: Effect propagation through graph convolutions. a) Pairwise graph convolution. A receiver particle is constrained in its movement through graph relations with one or more sender particles. Given the sender state, the receiver state, and their relation, the effect of the sender on the receiver is computed using a fully connected neural network. The overall effect on the receiver is the sum of all incoming effects. b) Hierarchical graph convolution. Effects in the hierarchy are propagated in three consecutive steps: (1) L2A: leaf particles propagate effects to all of their ancestors. (2) WS: effects are exchanged between siblings. (3) A2D: effects are propagated from ancestors to all of their descendants.
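The pairwise graph convolution of Figure 3a can be sketched in a few lines of NumPy, with a small random two-layer network standing in for the learned effect MLP; the layer sizes, feature dimensions, and toy graph below are illustrative assumptions rather than the paper's actual configuration:

```python
import numpy as np

def mlp(x, w1, b1, w2, b2):
    """Tiny two-layer ReLU MLP standing in for the learned pairwise effect network."""
    return np.maximum(x @ w1 + b1, 0.0) @ w2 + b2

def pairwise_graph_convolution(states, edges, relations, params):
    """states: (N, Ds) particle states; edges: list of (sender, receiver) index pairs;
    relations: (E, Dr) per-edge relation features; returns (N, De) aggregated effects."""
    n_particles = states.shape[0]
    effect_dim = params[3].shape[0]  # output dimension of the effect MLP
    effects = np.zeros((n_particles, effect_dim), dtype=np.float32)
    for k, (s, r) in enumerate(edges):
        pair_input = np.concatenate([states[s], states[r], relations[k]])
        effects[r] += mlp(pair_input, *params)  # commutative aggregation: sum per receiver
    return effects

# Toy example: 4 particles, a chain of relations, random (untrained) weights.
rng = np.random.default_rng(0)
Ds, Dr, H, De = 7, 1, 16, 8
params = (rng.normal(size=(2 * Ds + Dr, H)).astype(np.float32), np.zeros(H, np.float32),
          rng.normal(size=(H, De)).astype(np.float32), np.zeros(De, np.float32))
states = rng.normal(size=(4, Ds)).astype(np.float32)
edges = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2)]
relations = np.ones((len(edges), Dr), dtype=np.float32)
print(pairwise_graph_convolution(states, edges, relations, params).shape)  # (4, 8)
```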

Pairwise processing limits graph convolutions to propagating effects only between directly connected nodes. For a generic flat graph, we would have to apply this operation repeatedly until the information from all particles had propagated across the whole graph, which is infeasible in a scenario with many particles. Instead, we leverage the direct connections between particles and their ancestors in our hierarchy to propagate all effects across the entire graph in one model step. We introduce a hierarchical graph convolution, a three-stage mechanism for effect propagation, as shown in Figure 3b:

The first L2A (Leaves to Ancestors) stage computes the effect of a leaf particle on each of its ancestors, given the two particle states, the material property information of their relation, and the input effects already acting on the leaf. The second WS (Within Siblings) stage computes the effects that sibling particles exert on one another. The third A2D (Ancestors to Descendants) stage computes the effect of an ancestor particle on each of its descendants. The total propagated effect on a particle is obtained by summing the effects it receives from these stages.

In practice, the three stages are realized as fully-connected networks with shared weights that receive an additional ternary input (one value each for L2A, WS, and A2D) in the form of a one-hot vector.

Since all particles within one object are connected to the root node, information can flow across the entire hierarchical graph in at most two propagation steps. We make use of this property in our model.
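A schematic of the three-stage propagation might look as follows; the stage names, dictionary layout, and stand-in convolution are our own assumptions (in the actual model the three stages share one effect network that additionally receives a one-hot stage indicator, as described above):

```python
import numpy as np

EFFECT_DIM = 8  # illustrative size of the learned effect vectors

def hierarchical_graph_convolution(states, hierarchy, relations, conv):
    """Three consecutive stages of effect propagation over the particle hierarchy.

    hierarchy and relations map each stage name to its (sender, receiver) edge list
    and per-edge features; conv(states, edges, rel, effects_so_far) is a pairwise
    graph convolution (like the one sketched above, extended to also see the effects
    accumulated so far) that returns one effect vector per particle."""
    effects = np.zeros((states.shape[0], EFFECT_DIM), dtype=np.float32)
    for stage in ("leaf_to_ancestor",   # L2A: leaves -> all of their ancestors
                  "within_siblings",    # WS : exchange between siblings of a group
                  "ancestor_to_desc"):  # A2D: ancestors -> all of their descendants
        effects = effects + conv(states, hierarchy[stage], relations[stage], effects)
    return effects

# Stand-in convolution so the sketch executes: project sender states, sum per receiver.
def toy_conv(states, edges, rel, effects_so_far):
    out = np.zeros((states.shape[0], EFFECT_DIM), dtype=np.float32)
    proj = np.ones((states.shape[1], EFFECT_DIM), dtype=np.float32)
    for (s, r) in edges:
        out[r] += states[s] @ proj
    return out

# Two leaves (0, 1) under one root (2): edge lists for each stage of a minimal hierarchy.
states = np.arange(21, dtype=np.float32).reshape(3, 7)
hierarchy = {"leaf_to_ancestor": [(0, 2), (1, 2)],
             "within_siblings": [(0, 1), (1, 0)],
             "ancestor_to_desc": [(2, 0), (2, 1)]}
relations = {name: None for name in hierarchy}
print(hierarchical_graph_convolution(states, hierarchy, relations, toy_conv).shape)  # (3, 8)
```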

4.2 The Hierarchical Relation Network Architecture

Figure 4: Hierarchical Relation Network. The model takes the past particle graphs as input and outputs the next particle states. The inputs to each graph-convolutional effect module are the particle states and relations; the outputs are the respective effects. Separate modules process past states, collisions, and external forces. The hierarchical graph-convolutional module takes the sum of all effects together with the pairwise particle states and relations and propagates the effects through the graph. Finally, a state prediction module uses the propagated effects to compute the next particle states.

This section introduces the Hierarchical Relation Network (HRN), a neural network for predicting future physical states, shown in Figure 4. At each time step, the HRN takes a history of previous particle states and relations in the form of hierarchical scene graphs as input. The graph dynamically changes over time as directed, unlabeled virtual collision relations are added for sufficiently close pairs of particles. The HRN also takes external effects on the system (for example gravity or external forces) as input. The model consists of three pairwise graph convolution modules, one for external forces, one for collisions, and one for past states, followed by a hierarchical graph convolution module that propagates effects through the particle hierarchy. A fully-connected module then outputs the next particle states.

In the following, we briefly describe each module. For ease of reading, we drop the explicit time indices and assume that all variables refer to the current history window unless otherwise noted.

External Force Module   The external force module converts external forces applied to leaf particles into effects on those particles.

Collision Module   Collisions between objects are handled by dynamically defining pairwise collision relations between leaf particles of different objects that come close to each other (Chang et al., 2016). The collision module uses the two particle states and their collision relation to compute the effect of each particle on the other. The overall collision effect on a particle is the sum of the effects from all of its collision relations. A distance hyperparameter sets the maximum separation at which a collision relation is created.
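For illustration, the dynamic construction of collision relations could be sketched as a brute-force distance test (the function name and threshold argument are ours; a spatial acceleration structure would be used for large scenes):

```python
import numpy as np

def collision_pairs(positions, object_ids, max_dist):
    """Return directed (sender, receiver) pairs of leaf particles from different
    objects whose distance is below max_dist; these become temporary collision
    relations for the current time step."""
    pairs = []
    for i in range(len(positions)):
        for j in range(len(positions)):
            if i != j and object_ids[i] != object_ids[j] \
                    and np.linalg.norm(positions[i] - positions[j]) < max_dist:
                pairs.append((i, j))
    return pairs

# Two single-particle objects 0.1 apart and a third object far away.
pos = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [5.0, 0.0, 0.0]])
print(collision_pairs(pos, np.array([0, 1, 2]), max_dist=0.2))  # [(0, 1), (1, 0)]
```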

History Module   The history module computes the effects of past leaf particle states on the current leaf particle states.

Hierarchical Effect Propagation Module   The hierarchical effect propagation module propagates the combined effects from external forces, collisions, and history through the particle hierarchy. It corresponds to the three-stage hierarchical graph convolution introduced in Figure 3b, which, given the pairwise particle states, their relations, and the input effects, outputs the total propagated effect on each particle.

State Prediction Module   We use a simple fully-connected network to predict the next particle states. To obtain more accurate predictions, we leverage the hierarchical particle representation by predicting the dynamics of each particle within the local coordinate system originating at its parent. The only exceptions are object root particles, for which we predict global dynamics. Specifically, the state prediction module predicts each particle's future local delta position from the particle state, the total effect on the particle, and gravity. Since we only predict global dynamics for object root particles, gravity is applied only to these root particles. The final future delta position in world coordinates is obtained by combining a particle's predicted local motion with the predicted motion of its parent.
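Putting the modules together, one HRN prediction step can be sketched as follows; the dictionary layout, module names, and the simple two-level local-to-world conversion are illustrative assumptions of ours, not the paper's exact interface:

```python
import numpy as np

def hrn_step(graph, forces, gravity, modules):
    """One HRN prediction step, schematically. Module internals are the learned
    graph-convolution MLPs described above; all names here are illustrative.

    graph   : dict with 'states' (N, D), 'positions' (N, 3), hierarchy/relation
              edge lists, 'collision_pairs', 'past_states', a 'parent' index per
              particle, and an 'is_root' mask over particles
    modules : dict of callables {force, collision, history, propagate, predict}
    returns : predicted next particle positions in world coordinates."""
    states = graph["states"]
    # 1. Pairwise effect modules operating on the leaf particles.
    e_force = modules["force"](states, forces)
    e_collision = modules["collision"](states, graph["collision_pairs"])
    e_history = modules["history"](states, graph["past_states"])
    # 2. Propagate the summed effects through the particle hierarchy (L2A, WS, A2D).
    e_total = modules["propagate"](states, graph["hierarchy"], graph["relations"],
                                   e_force + e_collision + e_history)
    # 3. Predict each particle's delta position in its parent's local frame;
    #    gravity is applied only to object root particles, which move globally.
    delta = modules["predict"](states, e_total, gravity, graph["is_root"])
    # 4. Convert local deltas to world coordinates (exact for a two-level
    #    leaf/root hierarchy; deeper trees accumulate along the ancestor chain).
    parent = graph["parent"]
    delta_world = np.where(graph["is_root"][:, None], delta, delta + delta[parent])
    return graph["positions"] + delta_world
```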

4.3 Learning Physical Constraints through Loss Functions and Data

Traditionally, physical systems are modeled with equations providing fixed approximations of the real world. Instead, we choose to learn physical constraints, including the meaning of the material property vector, from data. The error signal we found to work best is a combination of three objectives. (1) We predict the position change between time steps $t$ and $t+1$ independently for all particles in the hierarchy. In practice, we find that these position deltas differ in magnitude for particles at different levels; therefore, we normalize the local dynamics using the statistics of all particles in the same level (local loss). (2) We also require that the global future delta position is accurate (global loss). (3) We aim to preserve the intra-object particle structure by requiring that the pairwise distance between two connected particles in the next time step matches the ground truth. For a rigid body, this term works to preserve the distance between particles; for soft bodies, it ensures that pairwise local deformations are learned correctly (preservation loss).

The total objective function linearly combines (1), (2), and (3), weighted by two hyperparameters $\alpha$ and $\beta$:

$$\mathcal{L} = \mathcal{L}_{\text{local}} + \alpha \, \mathcal{L}_{\text{global}} + \beta \, \mathcal{L}_{\text{preserve}}$$
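A minimal sketch of this combined objective, assuming mean-squared-error terms and treating the per-level normalization and default weights as our own simplifications, is:

```python
import numpy as np

def hrn_loss(pred_local, true_local, level_std,
             pred_global, true_global,
             pred_pair_dist, true_pair_dist,
             alpha=1.0, beta=1.0):
    """Schematic combination of the three training objectives; the argument names
    and default weights are illustrative assumptions.

    pred_local/true_local   : per-particle delta positions in parent-local frames,
                              normalized by the statistics of their hierarchy level
    pred_global/true_global : per-particle delta positions in world coordinates
    pred/true_pair_dist     : distances between connected particle pairs at t+1."""
    local_loss = np.mean(((pred_local - true_local) / level_std) ** 2)
    global_loss = np.mean((pred_global - true_global) ** 2)
    preserve_loss = np.mean((pred_pair_dist - true_pair_dist) ** 2)
    return local_loss + alpha * global_loss + beta * preserve_loss
```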

5 Experiments

In this section, we examine the HRN's ability to accurately predict the physical state across time in scenarios with rigid bodies, deformable bodies (soft bodies, cloths, and fluids), collisions, and external actions. We also evaluate generalization performance across various object and environment properties. Finally, we present some more complex scenarios, including falling block towers and dominoes. Prediction roll-outs are generated by recursively feeding the HRN's one-step prediction back as input. We strongly encourage readers to look at the result examples shown in the main text figures and supplementary materials, and at https://youtu.be/kD2U6lghyUE.

All training data for the experiments below was generated with a custom interactive particle-based environment built on the FleX physics engine (Macklin et al., 2014) in Unity3D. This environment provides (1) an automated way to extract a particle representation from a 3D object mesh, (2) a convenient way to generate randomized physics scenes for static training data, and (3) a standardized way to interact with objects in the environment through forces. HRN code and the Unity FleX environment can be found at https://neuroailab.github.io/physics/. Further details about the experimental setups and training procedure can be found in the supplement.

Figure 5: Prediction examples and ground truth. a) A cone bouncing off a plane. b) Parabolic motion of a bunny. A force is applied at the first frame. c) A cube falling on a slope. d) A cone colliding with a pentagonal prism. Both shapes were held-out. e) Three objects colliding on a plane. f) Falling block tower not trained on. g) A cloth drops and folds after hitting the floor. h) A fluid drop bursts on the ground. We strongly recommend watching the videos in the supplement.

5.1 Qualitative evaluation of physical phenomena

Rigid body kinematic motion and external forces. In a first experiment, rigid objects are pushed up, via an externally applied force, from a ground plane then fall back down and collide with the plane. The model is trained on 10 different simple shapes (cube, sphere, pyramid, cylinder, cuboid, torus, prism, octahedron, ellipsoid, flat pyramid) with 50-300 particles each. The static plane is represented using 5,000 particles with a practically infinite mass. External forces spatially dispersed with a Gaussian kernel are applied at randomly chosen points on the object. Testing is performed on instances of the same rigid shapes, but with new force vectors and application points, resulting in new trajectories. Results can be seen in supplementary Figure F.9c-d, illustrating that the HRN correctly predicts the parabolic kinematic trajectories of tangentially accelerated objects, rotation due to torque, responses to initial external impulses, and the eventual elastic collisions of the object with the floor.

Complex shapes and surfaces. In more complex scenarios, we train on the simple shapes colliding with a plane then generalize to complex non-convex shapes (e.g. bunny, duck, teddy). Figure 5b shows an example prediction for the bunny; more examples are shown in supplementary Figure F.9g-h.

We also examine spheres and cubes falling on 5 complex surfaces: slope, stairs, half-pipe, bowl, and a “random” bumpy surface. See Figure 5c and supplementary Figure F.10c-e for results. We train on spheres and cubes falling on the 5 surfaces, and test on new trajectories.

Dynamic collisions. Collisions between two moving objects are more complicated to predict than static collisions (e.g. between an object and the ground). We first evaluate this setup in a zero-gravity environment to obtain purely dynamic collisions. Training was performed on collisions between 9 pairs of shapes sampled from the 10 shapes in the first experiment. Figure 5d shows predictions for collisions involving shapes not seen during training, the cone and pentagonal prism, demonstrating HRN’s ability to generalize across shapes. Additional examples can be found in supplementary Figure F.9e-f, showing results on trained shapes.

Many-object interactions. Complex scenarios include simultaneous interactions between multiple moving objects supported by static surfaces. For example, when three objects collide on a planar surface, the model has to resolve direct object collisions, indirect collisions through intermediate objects, and forces exerted by the surface to support the objects. To illustrate the HRN’s ability to handle such scenarios, we train on combinations of two and three objects (cube, stick, sphere, ellipsoid, triangular prism, cuboid, torus, pyramid) colliding simultaneously on a plane. See Figure 5e and supplementary Figure F.10f for results.

We also show that HRN trained on the two and three object collision data generalizes to complex new scenarios. Generalization tests were performed on a falling block tower, a falling domino chain, and a bowl containing multiple spheres. All setups consist of 5 objects. See Figure 5f and supplementary Figures F.9b and F.10b,g for results. Although predictions sometimes differ from ground truth in their details, results still appear plausible to human observers.

Soft bodies. We repeat the same experiments but with soft bodies of varying stiffness, showing that HRN properly handles kinematics, external forces, and collisions with complex shapes and surfaces involving soft bodies. One illustrative result is depicted in Figure 1, showing a non-rigid cube as it deformably bounces off the floor. Additional examples are shown in supplementary Figure F.9g-h.

Cloth. We also experiment with various cloth setups. In the first experiment, a cloth drops onto the floor from a certain height and folds or deforms. In another experiment, a cloth is fixed at two points and swings back and forth. Cloth predictions are very challenging, as cloths do not spring back to their original shape and self-collisions have to be resolved in addition to collisions with the ground. To address this challenge, we add self-collisions, i.e. collision relations between particles within the same object, to the collision module. Results can be seen in Figure 5g and supplementary Figure F.11 and show that the cloth motion and deformations are accurately predicted.

Fluids. To test our model's ability to predict fluids, we perform a simple experiment in which a drop of fluid falls onto the floor from a certain height. As effects within a fluid are mostly local, flat hierarchies with small groupings work better for fluid prediction. Results can be seen in Figure 5h and show that the fall of a liquid drop is successfully predicted when the model is trained on this scenario.

Response to parameter variation. To evaluate how the HRN responds to changes in mass, gravity, and stiffness, we train on datasets in which these properties vary. At test time, we vary these parameters for the same initial state and evaluate how the trajectories change. In supplementary Figures F.14, F.13 and F.12 we show results for each variation, illustrating, e.g., how objects accelerate more rapidly in a stronger gravitational field.

Heterogeneous materials. We leverage the hierarchical particle graph representation to construct objects that contain both rigid and soft parts. After training a model with objects of varying shapes and stiffnesses falling on a plane, we manually adjust individual stiffness relations to create a half-rigid, half-soft object and generate HRN predictions. Supplementary Figure F.10h shows a half-rigid, half-soft pyramid. Note that there is no ground truth for this example, as it exceeds the capabilities of the underlying physics simulator, which cannot simulate objects with heterogeneous materials.

5.2 Quantitative evaluation and ablation

We compare the HRN to several baselines and model ablations. The first baseline is a simple multi-layer perceptron (MLP) which takes the full particle representation and directly outputs the next particle states. The second baseline is the Interaction Network as defined by Battaglia et al. (2016), denoted fully connected graph, as it corresponds to removing our hierarchy and computing on a fully connected graph. In addition, to show the importance of the force, collision, and history modules, we remove each one and replace it with a simple alternative. In the no force module ablation, the force module is replaced by concatenating the forces to the particle states and feeding them directly into the hierarchical effect propagation module. Similarly, in the no collision module ablation, the collision module is removed by adding the collision relations to the object relations and feeding them directly through the hierarchical effect propagation module. In the no history module ablation, the history module is simply removed and not replaced. Next, we show that using two input time steps improves results by comparing against a one-time-step model. Lastly, we evaluate the importance of the preservation loss and of the global loss component added to the local loss. All models are trained on scenarios in which two cubes fall on a plane and repeatedly collide after being pushed towards each other. The models are tested on held-out trajectories of the same scenario. An additional evaluation of different grouping methods can be found in Section B of the supplement.

Figure 6: Quantitative evaluation. We compare the full HRN (global + local loss) to several baselines and ablations, namely local loss only, no preservation loss, no force module, no collision module, no history module, 1 time step, fully connected graph, and an MLP baseline. The line graphs, from left to right, show the mean squared error (MSE) between positions, delta positions, and distance preservation, accumulated over time. Our model has the lowest position and delta position error and only a slightly higher preservation error.

Comparison metrics are the cumulative mean squared errors of the absolute global position, the local position delta, and the preserved pairwise distance, accumulated up to a given time step. Results are reported in Figure 6. The HRN outperforms all controls most of the time. The hierarchy is especially important, with the fully connected graph and MLP baselines performing substantially worse. In addition, the HRN without the hierarchical graph convolution mechanism performs significantly worse, as seen in supplementary Figure C.4, which shows the necessity of the three consecutive graph convolution stages. In qualitative evaluations, we found that using more than one input time step improves results, especially during collisions, as the acceleration is better estimated; the metrics in Figure 6 confirm this. We also found that splitting collisions, forces, history, and effect propagation into separate modules with separate weights allows each module to specialize, improving predictions. Lastly, the proposed loss structure is crucial to model training: without distance preservation or the global delta position prediction, our model performs much worse. See supplementary Section C for further discussion of the losses and graph structures.

5.3 Discussion

Our results show that the vast majority of complex multi-object interactions are predicted well, including multi-point collisions between non-convex geometries and complex scenarios like the bowl containing multiple rolling balls. Although not shown, one could in theory also simulate shattering objects by removing enough relations between particles within an object. These manipulations are of substantial interest because they go beyond what can be generated in our simulation environment. Additionally, predictions for especially challenging situations such as multi-block towers were also mostly effective, with objects (mostly) retaining their shapes and rolling over each other convincingly as towers collapsed (see the supplement and the video). The loss of shape preservation over time can be partially attributed to the compounding errors generated by the recursive roll-outs. However, our model predicts the tower to collapse faster than in the ground truth, and predictions also jitter when objects should stand absolutely still. These failures are mainly due to the fact that the training set contained only interactions between fast-moving pairs or triplets of objects, with no scenarios involving objects at rest; that the model generalized to towers as well as it did is a powerful illustration of our approach. Adding a fraction of training observations with objects at rest makes towers behave more realistically and removes the jitter overall. The training data plays a crucial role in reaching the final model performance and its generalization ability. Ideally, the training set would cover the entirety of physical phenomena in the world; however, designing such a dataset by hand is intractable. Thus, methods in which a self-driven agent sets up its own physical experiments will be crucial to maximize learning and understanding (Haber et al., 2018).

6 Conclusion

We have described a hierarchical graph-based scene representation that allows the scalable specification of arbitrary geometrical shapes and a wide variety of material properties. Using this representation, we introduced a learnable neural network based on hierarchical graph convolution that generates plausible trajectories for complex physical interactions over extended time horizons, generalizing well across shapes, masses, external and internal forces, and material properties. Because of the particle-based nature of our representation, it naturally captures object permanence, identified in cognitive science as a key feature of human object perception (Spelke et al., 1992).

A wide variety of applications of this work are possible. Several of interest include developing predictive models for grasping rigid and soft objects in robotics, and modeling the physics of 3D point cloud scans for video games or other simulations. To enable a pixel-based, end-to-end trainable version of the HRN for use in key computer vision applications, it will be critical to combine our work with adaptations of existing methods (e.g. Wu et al., 2016b; Kipf et al., 2018; Fan et al., 2017) for inferring initial (non-hierarchical) scene graphs from LIDAR/RGBD/RGB image or video data. In the future, we also plan to remedy some of the HRN's limitations, expanding the classes of materials it can handle to include inflatables and gases, and extending it to dynamic scenarios in which objects can shatter or merge. This will require a more sophisticated representation of material properties as well as a more nuanced hierarchical construction. Finally, it will be of great interest to evaluate to what extent HRN-type models describe patterns of human intuitive physical knowledge observed by cognitive scientists (McCloskey et al., 1980; Piloto et al., 2018; Riochet et al., 2018).

Acknowledgments

We thank Viktor Reutskyy, Miles Macklin, Mike Skolones and Rev Lebaredian for helpful discussions and their support with integrating NVIDIA FleX into our simulation environment. This work was supported by grants from the James S. McDonnell Foundation, Simons Foundation, and Sloan Foundation (DLKY), a Berry Foundation postdoctoral fellowship (NH), the NVIDIA Corporation, ONR - MURI (Stanford Lead) N00014-16-1-2127 and ONR - MURI (UCLA Lead) 1015 G TA275.


Appendix A Iterative hierarchical grouping algorithm

We describe the iterative grouping algorithm used to generate our hierarchical particle-based object representation in Algorithm 1:

input : Scene graph G = ⟨P, R⟩ with particles P, relations R, and target cluster size k
output : Hierarchical scene graph G_H = ⟨P_H, R_H⟩
begin
    Initialize P_H ← P and R_H ← ∅;
    for each connected component (object) O in G do
        Initialize P_O ← ∅ and R_O ← ∅;
        Create a root particle r_O whose state aggregates the particles in O;
        Connect r_O to every particle p in O with relations (r_O, p);
        Connect every particle p in O to r_O with relations (p, r_O);
        Add these relations to R_O and add r_O to P_O;
        Initialize the particle processing queue Q ← [(r_O, particles of O)];
        while Q not empty do
            Pop the current node c and its particle set P_c from Q;
            Initialize the set of sibling nodes C ← ∅;
            if |P_c| > k then
                Use k-means to group P_c into subcomponents S_1, ..., S_k;
                for each subcomponent S_i do
                    if |S_i| > 1 then
                        Create a new intermediate particle c_i whose state aggregates S_i;
                        Connect all p in S_i to c_i with relations (p, c_i);
                        Connect c_i to all p in S_i with relations (c_i, p);
                        Connect the object root r_O to c_i with a relation (r_O, c_i);
                        Add these relations to R_O;
                        Add c_i to P_O;
                        Add c_i to C;
                        Append (c_i, S_i) to the processing queue Q;
                    else
                        Add the single particle in S_i to C;
                    end if
                end for
            else
                Set C ← P_c;
            end if
            Connect all particle pairs p, q in C with relations (p, q) and (q, p);
            Add these relations to R_O;
        end while
        Add the relations R_O to R_H;
        Add the particles P_O to P_H;
    end for
    Return G_H = ⟨P_H, R_H⟩;
end
Algorithm 1 Iterative hierarchical grouping algorithm.
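For reference, a simplified Python version of the grouping step might look as follows; it uses scikit-learn's standard KMeans in place of the paper's modified k-means, handles a single object, and omits the rewiring of leaf-leaf relations, so it is a sketch of the idea rather than a faithful reimplementation:

```python
import numpy as np
from sklearn.cluster import KMeans

def build_hierarchy(leaf_positions, k=8):
    """Recursively group the leaf particles of ONE object into a tree (simplified
    sketch of Algorithm 1). Returns (nodes, root_index): nodes[i] has a position
    'pos' and a list of 'children' node indices (leaf particles have no children)."""
    nodes = [{"pos": np.asarray(p, dtype=float), "children": []} for p in leaf_positions]

    def group(indices):
        # Create an aggregate node whose position summarizes this group of particles.
        group_pos = np.mean([nodes[i]["pos"] for i in indices], axis=0)
        nodes.append({"pos": group_pos, "children": list(indices)})
        node_id = len(nodes) - 1
        if len(indices) > k:
            # Too many members: split into at most k subcomponents and recurse, so the
            # new node's children become intermediate nodes instead of raw leaves.
            pts = np.stack([nodes[i]["pos"] for i in indices])
            labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pts)
            nodes[node_id]["children"] = [
                group([indices[j] for j in np.flatnonzero(labels == c)])
                for c in range(k)
            ]
        return node_id

    root = group(list(range(len(leaf_positions))))
    return nodes, root

# Example: 64 particles on a 4x4x4 grid grouped into a three-level hierarchy.
grid = np.stack(np.meshgrid(*[np.arange(4.0)] * 3), axis=-1).reshape(-1, 3)
nodes, root = build_hierarchy(grid, k=8)
print(len(nodes), len(nodes[root]["children"]))  # 64 leaves plus internal nodes, 8 children at the root
```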

Appendix B Comparison of different grouping methods

While performing a hyperparameter search, we also tried several different grouping methods. Here, we compare agglomerative clustering against different versions of k-means. Specifically, we generated hierarchies with up to 8 or up to 10 particles per group using k-means. As seen in Figure B.1 and Figure B.2, we found that k-means with groups of up to 8 particles works best, offering a reasonable trade-off between the number of particles per group and the number of hierarchical layers for the tested objects. However, the improvement over the other clustering algorithms is minor, indicating that the HRN is robust to the grouping method.

Figure B.1: Qualitative comparison of different grouping methods. Agglomerative grouping (top) is compared against k-means with up to 10 particles per group (middle), and k-means with up to 8 particles per group (bottom) which is used in HRN.
Figure B.2: Quantitative comparison of different grouping methods. Agglomerative grouping (yellow) is compared against k-means with up to 10 particles per group (green), and k-means with up to 8 particles per group (blue) which is used in HRN.

Appendix C Comparison of different losses and graph structures

This section complements the quantitative evaluation and ablation studies. Figure C.3 compares the predictions of models trained with the different loss terms. Results of a model trained with a combination of global and local losses are visually closest to ground truth. These qualitative results align well with the quantitative results in Figure C.4 and Figure 6.

Figure C.4 also illustrates the importance of a hierarchical graph (global + local loss) compared to a sparse flat graph or a fully connected graph. While the fully connected graph performs worse than the sparse flat graph and the hierarchical graph on all metrics, the sparse flat graph is comparable to the hierarchical graph on the position and delta position MSE. However, the sparse flat graph does much worse on the preserve distance MSE, indicating that the original object shape is hardly preserved. Presumably, the effect propagation in the sparse flat graph is less effective than in the hierarchical graph leading to acceptable particle positions but deformed objects.

In summary, better performance on the quantitative metrics (position MSE, delta position MSE, and preserve distance MSE) indeed corresponds to qualitatively better examples. Our final combination of global and local loss terms outperforms each individual loss on its own. Similarly, our hierarchical graph significantly improves predictions compared to a sparse flat graph or a fully connected graph.

Figure C.3: Qualitative comparison of different loss terms. Combining global and local loss terms (top) results in predictions closest to the ground truth (bottom) compared with using no preservation loss, a local loss or global loss by itself.
Figure C.4: Quantitative comparison of different losses and graph structures. Losses and graph structure are ablated from left to right. In terms of losses, the full HRN with global + local loss (blue) is compared against local loss only (green), global loss only (red) and a loss without a preserve distance term (yellow). Regarding graph structure, the full HRN (blue) is compared against a sparse flat graph in which the hierarchy was removed (purple) and a fully connected graph structure (black) as presented in Battaglia et al. [2016].

Appendix D Implementation details

D.1 Detailed model structure

The HRN is given the particle states, the gravity, and any external forces. It is trained to predict the future particle states for each object. In our implementation, the model actually predicts the change in local position for each particle and uses it to advance the particle states.

Figure D.5 shows a detailed overview of the HRN model architecture. In total, there are five modules, each with its own MLP. The dotted box denotes shared weights between the three hierarchical graph convolution stages (L2A, WS, and A2D). All MLPs use a ReLU nonlinearity. The number of units, layers, and output dimension of each MLP were chosen through a hyperparameter search. The gravity input to the state prediction module is only added for the global super-particles (root nodes) of each object.

Figure D.5: Detailed description of the HRN model architecture.

D.2 Training procedure

We train the network using the Adam optimizer with a batch size of 256 across multiple Nvidia Titan Xp GPUs. The initial learning rate was set at 0.001 and decayed stepwise a total of 3 times, alternating between a factor of 2 and 5 each step. We used TensorFlow for the implementation. For the generalization experiments we include data augmentation in the form of random grouping, mass, and translation.
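As an illustration, the learning-rate schedule described above could be expressed as the following small helper; the specific decay boundaries are our own placeholder values, since only the number of decays and the alternating factors are stated:

```python
def learning_rate(step, base_lr=1e-3, boundaries=(100_000, 200_000, 300_000)):
    """Stepwise schedule: start at base_lr and divide alternately by 2 and 5 at each
    decay point (the boundary steps here are illustrative assumptions)."""
    lr, factors = base_lr, (2.0, 5.0)
    for i, boundary in enumerate(boundaries):
        if step >= boundary:
            lr /= factors[i % 2]
    return lr

print([learning_rate(s) for s in (0, 150_000, 250_000, 350_000)])
# [0.001, 0.0005, 0.0001, 5e-05]
```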

Appendix E Detailed experimental setups

E.1 Particle-based physics simulation environment

Based on the FleX physics engine [Macklin et al., 2014], we built a custom interactive particle-based environment in Unity3D. This environment automatically decomposes any given 3D object mesh into a particle representation using the FleX API. On top of this representation, it provides a convenient way to generate randomized physics scenes for static training data. The user can construct random scenes through a Python interface that communicates with Unity3D. This interface also allows physical interactions with objects within a defined scene; for instance, one can apply forces to a whole object or to individual particles to generate translational and rotational position variations. It is possible both to generate static datasets from the environment and train offline, and to train and interact with the environment online. To this end, the environment sends the Python client the particle state at every frame, as well as images captured by a camera in the scene. Scenes can be rendered at around 30 frames per second; the simulation time increases with the number of particles. Figure E.6 shows a screenshot of the environment embedded in the Unity3D editor. Mesh skins are used to mask the particles in the main scene to give the impression of a continuous object. The lower right of this screenshot shows the particle representation of the cube in the scene after FleX has converted the 3D mesh into a particle representation. Code for this environment, along with the entire HRN code base, can be found at https://neuroailab.github.io/physics/.

Figure E.6: Particle-based Interaction Environment in Unity3D. Screenshot of the Unity Editor with FleX Plugin. In the main scene a cube is colliding with a planar surface. The lower right shows the particle representation of the cube. This environment is used to generate training and validation data through interactions with objects in the scene. Interactions with the environment are possible through a python interface.

E.2 Shapes and surfaces used during experiments

Figure E.7 and Figure E.8 show the 3D mesh and the leaf particle representation of all shapes and surfaces used during training or testing. Moving objects consist of 50-300 particles, surfaces of more than 5000 particles. Only one particle resolution is shown although multiple levels of detail in the leaf node representation are possible by changing the particle spacing within an object.

Figure E.7: Dynamic shapes and particle representations. All shapes used during testing and training are shown. Shapes consist of 50-300 particles. Only one particle resolution is shown.
Figure E.8: Surfaces and particle representations. All surfaces used during testing and training are shown. Surfaces consist of 5000 - 7000 particles.

E.3 Throwing one object in the air

In this experiment, one of the small shapes depicted in Figure E.7 is chosen to collide with one of the surfaces in Figure E.8. The small shape is teleported to a random location around the center of the surface. The stiffness is randomly chosen per object after each teleport. As the simulation starts, the shape falls onto the surface and collides with it. Every random number of frames, we apply a randomly sampled force pointing upward and perpendicular to the surface to lift the object and watch it fall again along a parabolic trajectory. If the object leaves the surface boundaries, we teleport it back to the center. After a fixed number of steps, we reinitialize the scene and the whole simulation procedure starts again.

E.4 Cloths

Two different experiments are performed to test our model on predicting the motion of a cloth. The first experiment is similar to throwing an object in the air: a loose cloth is teleported to a random location above the ground. On simulation start, the cloth drops on the ground. Then, every fixed number of frames, we apply a random force dispersed by a Gaussian kernel to the cloth and watch it deform. After a fixed number of steps we reinitialize the scene and the whole simulation procedure starts again. In the second experiment, we attach two corners of the cloth to a random location in the air. Every fixed number of steps a random force is applied to the cloth, which deforms the cloth and makes it swing back and forth. The scene is reset after a fixed number of frames and the two cloth corners are attached at a new random location.

E.5 Fluids

In the fluid experiment, a cube-shaped body of fluid is teleported to a random location around the center of the ground. As the simulation starts, the fluid drops onto the ground and disperses on contact. The fluid's surface tension holds it together such that fluid particles cluster into one or a few puddles. After a set number of frames, the fluid is reset to its original cube-like shape and teleported to the next random location.

E.6 Collisions between objects without gravity

This experimental setup is very similar to throwing an object in the air with the difference that gravity is disabled, and we choose two small dynamic shapes that collide with each other in the air. The stiffness is randomly chosen per object after a teleport. Forces are applied such that they either point directly from one object to the other or away from each other. The force magnitude and perturbations to the force direction are randomly chosen every time an action is applied. Forces are applied randomly either to one or both objects at the same time. The simulation is reinitialized if any of the two objects leaves the room boundaries.

E.7 Collisions between objects on a planar surface

This experiment is a combination of the previous two. Just as in throwing one object in the air, the two or three chosen small objects are spawned randomly around the center of the planar surface. The stiffness is randomly chosen per object after each teleport. The objects fall and collide with the plane. Similar to collisions between objects without gravity, forces are applied such that the objects either collide with each other or are pushed apart. The force magnitude and perturbations to the force direction are chosen randomly. Forces are applied randomly either to one or to two objects at the same time. The scene is reinitialized if any of the objects leaves the surface boundaries.

E.8 Stacked tower

In this experiment we manually construct a tower consisting of 5 stacked rigid cubes on a planar surface. The positions of the cubes are slightly perturbed at random to create towers of variable stability. After a random number of frames, a force, usually large enough to make the tower fall, is applied to a randomly chosen cube. Once the tower has fallen and the cubes no longer move, or after a maximum number of time steps, the setup is reset and repeated.

E.9 Dominoes

Similar to the stacked tower, we manually set up a scene in which a chain of rigid dominoes is placed on top of a planar surface. Small random perturbations are applied to the initial position of each domino. After a random number of frames, a force is applied to one or both ends of the chain to make it fall. Once the dominoes no longer move, or after a fixed maximum number of time steps, the setup is reset and repeated.

E.10 Balls in bowl

The last manually constructed control example consists of 5 balls dropping into a big bowl. The spheres are teleported to randomly chosen positions above the bowl. The balls then drop into the bowl and interact with each other. A random force is applied every random number of frames. Once the spheres have settled, or after a maximum number of time steps, we reinitialize the scene.

Appendix F Qualitative prediction examples

This section showcases additional qualitative prediction examples. Figure F.9 and Figure F.10 show additional examples with different objects and physical setups and failure cases. Figure F.11 visualizes additional cloth predictions.

In Figure F.12 we demonstrate the model's ability to handle varying stiffness inputs. The network is trained on multiple soft bodies of varying stiffness. The stiffness values are obtained from FleX during dataset generation and vary between 0.1 and 0.9 for soft bodies. By manually changing the input stiffness during testing, we can produce predictions of objects with varying levels of rigidity. The decreasing level of deformation from top to bottom is consistent with the increasing stiffness.

We also test whether the model can capture physical relationships in varying gravitational fields. Since the value of gravity is an input to our model, we can train on data with a changing gravitational constant. Figure F.13 shows an example with four different gravitational constants, ranging from 1 to 20. As expected, the object falls faster under stronger gravity.

As part of the particle state, we include the mass of each particle. While the total object mass is kept constant for most of the experiments, we test the case of varying mass by training on a dataset in which each object's mass varies by a factor of up to three. In Figure F.14 we manually increase the mass of one of the two objects in a collision and show that the heavier object is displaced less after the collision.

Figure F.9: Qualitative comparison of HRN predictions vs ground truth. a) A sphere falling out of a bowl. Objects containing other objects can be easily modeled. b) Five spheres fall into a bowl and collide with each other. Complex indirect collisions occur. c) A rigid pyramid colliding with the floor. d) A rigid sphere colliding with the floor. e) A cylinder colliding with a pyramid. f) An ellipsoid and an octahedron colliding with each other. g) A soft teddy colliding with the floor. h) A soft duck colliding with the floor.
Figure F.10: Qualitative comparison of HRN predictions vs ground truth. a) A very deformable stick. The ground truth shape had to be fed into the model for this prediction to work. b) Falling dominoes. HRN wrongly predicts one domino moving off to the side in this complex multi-object interaction scenario. c) A rigid cube colliding with stairs. d) A cube colliding with a random surface. e) A ball on a slope. f) Three objects colliding with each other. g) A slowly falling tower. The tower in the HRN prediction collapses much faster compared to ground truth. h) A half-rigid (right object side) half-soft (left object side) body colliding with a planar surface. The soft part deforms. The rigid part does not deform.
Figure F.11: Qualitative comparison of HRN predictions vs ground truth. a) Dropping cloth. Cloth drops from a certain height onto the ground. b) Hanging cloth. Cloth is fixated at two points and swings back and forth.
Figure F.12: Responsiveness to stiffness variations. We vary the stiffness of a cube colliding with a planar surface. The top row shows a soft cube with stiffness value 0.1, the middle row a stiffness value of 0.5, and the bottom row an almost rigid cube with a stiffness value of 0.9. Our network responds as expected to the changing stiffness value, deforming the soft cube more strongly than the rigid cube.
Figure F.13: Responsiveness to gravity variations. We vary the gravity while a soft cube falls onto a planar surface. From top to bottom, gravity values of 1.0, 5.0, 10.0 and 20.0 are depicted. The cube falls faster as gravity increases and even deforms when colliding with the floor under strong gravity. Our model behaves as expected when gravity changes.
Figure F.14: Responsiveness to mass variations. We vary the mass while two cubes collide. The top shows a scenario in which the purple cube is heavy; here the green cube bounces off more strongly than the purple one. The bottom shows the same scenario but with the green cube being heavy; here the green cube moves much less.