Discovering laws through observation of natural phenomena is the central challenge of the sciences. Modern deep learning also involves discovering knowledge about the world, but focuses mostly on implicit knowledge representations, rather than explicit
and interpretable ones. One reason is that the goal of deep learning is often optimizing test accuracy and learning efficiency in narrowly specified domains, while science seeks causal explanations and general-purpose knowledge across a wide range of phenomena. Here we explore an approach for imposing physically motivated inductive biases on neural networks, training them to predict the dynamics of physical systems and interpreting their learned representations and computations to discover the symbolic physical laws which govern the systems. Moreover, our results also show that this approach improves the generalization performance of the learned models.
The first ingredient in our approach is the “graph network” (GN) (Battaglia et al., 2018), a type of graph neural network (Scarselli et al., 2009; Bronstein et al., 2017; Gilmer et al., 2017), which is effective at learning the dynamics of complex physical systems (Battaglia et al., 2016; Chang et al., 2016; Sanchez-Gonzalez et al., 2018; Mrowca et al., 2018; Li et al., 2018; Kipf et al., 2018). We impose inductive biases on the architecture and train models with supervised learning to predict the dynamics of 2D and 3D n-body gravitational systems and a hanging string. If the trained models can accurately predict the physical dynamics of held-out test data, we can assume they have discovered some level of general-purpose physical knowledge, implicitly encoded in their weights. Crucially, we recognize that the forms of the graph network’s message and pooling functions correspond to the forms of force and superposition in classical mechanics, respectively. The message pooling is what we call a “linearized latent space”: a vector space in which the latent representations of the interactions between bodies (forces or messages) are linear (summable). By imposing our inductive bias, we encourage the GN’s linearized latent space to match the true one.
The second ingredient is symbolic regression — we use eureqa from Schmidt and Lipson (2009) — to fit compact algebraic expressions to a set of inputs and messages produced by our trained model. eureqa works by randomly combining mathematical building blocks such as operators, analytic functions, constants, and state variables, and iteratively searching the space of mathematical expressions to find the model that best fits a given dataset. The resulting symbolic expressions are interpretable and readily comparable with physical laws.
The contributions of this paper are:
- A modified GN with inductive biases that promote learning general-purpose physical laws.
- Using symbolic regression to extract analytical physical laws from trained neural networks.
- Improved zero-shot generalization to larger systems than those in training.
Graph networks are a type of deep neural network which operates on graph-structured data. The graphs on which GNs operate are defined as 3-tuples, $G = (u, V, E)$ (we adhere closely to the notation used in Battaglia et al. (2018) to formalize our model), where:
$u$ is a global attribute vector of length $L_u$,
$V = \{v_i\}_{i=1:N^v}$ is a set of node attribute vectors, each of length $L_v$, and
$E = \{(e_k, r_k, s_k)\}_{k=1:N^e}$ is a set of edge attribute vectors $e_k$, each of length $L_e$, together with the indices $r_k$ and $s_k$ of the “receiver” and “sender” nodes connected by the $k$-th edge.
Our GN implementation is depicted in fig. 1; note that it does not include global and edge attributes. This GN processes a graph by first computing pairwise interactions, or “messages”, $e'_k = \phi^e(v_{r_k}, v_{s_k})$, between nodes connected by edges, with a “message function” $\phi^e$. Next, the set of messages incident on each $i$-th receiver node is pooled into $\bar{e}'_i = \bigoplus_{k:\,r_k=i} e'_k$, where $\bigoplus$ is a permutation-invariant operation which can take variable numbers of input vectors, such as elementwise summation. Finally, the pooled messages are used to compute node updates, $v'_i = \phi^v(v_i, \bar{e}'_i)$, with a “node update function” $\phi^v$. Our specific architectural implementation is very similar to the “interaction network” (IN) variant (Battaglia et al., 2016).
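The message/pool/update steps above can be sketched as follows. This is a minimal NumPy illustration, not the paper's TensorFlow implementation: `phi_e` and `phi_v` stand in for the learned message and node-update functions, and the pooling is an explicit sum over incident edges.

```python
import numpy as np

def gn_step(V, edges, phi_e, phi_v):
    """One graph-network pass: messages, sum-pooling, node updates.

    V      : (N, Lv) array of node attribute vectors
    edges  : list of (receiver, sender) index pairs
    phi_e  : message function, maps (v_r, v_s) -> message vector e'_k
    phi_v  : node-update function, maps (v_i, pooled messages) -> v'_i
    """
    Le = phi_e(V[0], V[0]).shape[0]          # message dimension L_{e'}
    pooled = np.zeros((V.shape[0], Le))
    for r, s in edges:
        pooled[r] += phi_e(V[r], V[s])       # sum-pool messages at receiver
    return np.array([phi_v(V[i], pooled[i]) for i in range(V.shape[0])])
```

With placeholder functions, e.g. `phi_e = lambda vr, vs: vs - vr` and `phi_v = lambda v, p: v + p`, this reproduces the permutation-invariant pooling structure described above.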
The forms of $\phi^e$, $\bigoplus$, $\phi^v$, and the associated input and output attribute vectors have correspondences to Newton’s formulation of classical mechanics, which motivated the original development of INs. The key observation is that $e'_k$ could learn to correspond to the force vector imposed on the $r_k$-th body due to its interaction with the $s_k$-th body. In our examples, the force vector is equal to the derivative of the Lagrangian with respect to position: $F = \partial\mathcal{L}/\partial q$, and this could be generally imposed if one knows $\mathcal{L}$ and manually integrates the ODE with the output of the graph net. In a general n-body gravitational system in $D$ dimensions, note that the forces are minimally represented in a $D$-dimensional vector space. Thus, if we set the message dimension $L_{e'} = D$, we exploit the GN’s “linearized latent space” for physical interpretability: we encourage $e'_k$ to be the force.
We sketch a non-rigorous, proof-like demonstration of our hypothesis. Newtonian mechanics prescribes that force vectors, $F_k$, can be summed to produce a net force, $\bar{F}_i = \sum_{k:\,r_k=i} F_k$, which can then be used to update the dynamics of a body. Our model uses the $i$-th body’s pooled messages, $\bar{e}'_i$, to update the body’s velocity via Euler integration, $\dot{v}'_i = \dot{v}_i + \phi^v(v_i, \bar{e}'_i)$. If we assume our trained GN predicts velocity updates perfectly for any number of bodies, this means $\phi^v(v_i, \bar{e}'_i) = \frac{\Delta t}{m_i}\bar{F}_i$, where $\bar{e}'_i = \sum_{k:\,r_k=i} e'_k$. We have the result for a single interaction: $\phi^v(v_i, e'_k) = \frac{\Delta t}{m_i} F_k$. Thus, we can substitute into the multi-interaction case: $\phi^v(v_i, \sum_k e'_k) = \sum_k \phi^v(v_i, e'_k)$, and so $\phi^v$ has to be a linear transformation in its second argument. Therefore, for cases where $\phi^v$ is invertible (mapping between spaces of the same dimension), $e'_k = \frac{m_i}{\Delta t}(\phi^v)^{-1}(F_k)$, and so the message vectors are linear transformations of the true forces when $L_{e'} = D$. We demonstrate this hypothesis on trained GNs in section 3.
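The chain of substitutions can be laid out as displayed equations (our reconstruction of the argument, in the notation above, with $v_i$ held fixed):

```latex
\begin{align*}
\phi^v\big(v_i, \bar{e}'_i\big) &= \tfrac{\Delta t}{m_i}\,\bar{F}_i
  = \tfrac{\Delta t}{m_i} \sum_{k:\,r_k=i} F_k
  &&\text{(perfect prediction)}\\
\phi^v\big(v_i, e'_k\big) &= \tfrac{\Delta t}{m_i}\,F_k
  &&\text{(single interaction)}\\
\phi^v\Big(v_i, \textstyle\sum_{k:\,r_k=i} e'_k\Big)
  &= \textstyle\sum_{k:\,r_k=i} \phi^v\big(v_i, e'_k\big)
  &&\text{(hence additive in its second argument).}
\end{align*}
```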
We set up 100,000 random simulations for both a $1/r$ and $1/r^2$ force law in 2D, a $1/r^2$ law in 3D, and a string with an $r^2$ force law between nodes in 2D with a global gravity, for 1000 time steps each. The three n-body problems have six bodies in their training set, and the string has ten nodes, of which the two end nodes are fixed. We train a GN on each of these problems, where we choose $L_{e'} = D$, i.e., the length of the message vectors in the GN matches the dimensionality of the force vectors: 2 for the 2D simulations and 3 for the 3D simulations. Our GN, a pure TensorFlow (Abadi et al., 2015) model, has both $\phi^e$ and $\phi^v$ as three-hidden-layer multilayer perceptrons (MLPs) with 128 hidden nodes per layer with ReLU activations. We optimize the L1 loss between the predicted velocity update and the true velocity update of each node.
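The shapes involved can be sketched as follows, in NumPy rather than TensorFlow for brevity. The node-feature size of 7 (position, velocity, and mass in 3D) and the output size of $\phi^v$ are our assumptions for illustration; the three hidden layers of 128 ReLU units and the L1 loss follow the description above.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random weights for an MLP with len(sizes) - 2 hidden layers."""
    return [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Forward pass: ReLU on hidden layers, linear output layer."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)
    return x

D, NODE = 3, 7  # 3D case; assumed node features: 3 pos + 3 vel + 1 mass
phi_e = mlp([2 * NODE, 128, 128, 128, D])  # message MLP, L_{e'} = D
phi_v = mlp([NODE + D, 128, 128, 128, D])  # node-update MLP -> velocity update

def l1_loss(pred, true):
    """L1 loss between predicted and true velocity updates."""
    return np.mean(np.abs(pred - true))
```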
Once we have a trained model, we record the messages, $e'_k$, for all bodies over 1000 new simulations for each environment. We fit a linear combination of the vector components of the true force, $F_k$, to each component of $e'_k$, as can be seen in fig. 2 for the 3D simulation. The results for each system show that the $e'_k$ vectors have learned to be a linear combination of the force components when $L_{e'} = D$. We find similar linear relations for all other simulations.
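The linear fit can be done with ordinary least squares. A sketch, with synthetic data standing in for the recorded forces and messages:

```python
import numpy as np

def fit_messages_to_forces(F, E):
    """Fit each message component as a linear combination of the
    true-force components, plus a constant offset.

    F : (n, D) array of true force vectors F_k
    E : (n, Le') array of recorded messages e'_k
    Returns a (Le', D + 1) coefficient matrix and the R^2 of the fit.
    """
    X = np.hstack([F, np.ones((F.shape[0], 1))])   # design matrix with bias
    coef, _, _, _ = np.linalg.lstsq(X, E, rcond=None)
    resid = E - X @ coef
    r2 = 1.0 - resid.var() / E.var()
    return coef.T, r2
```

An $R^2$ near 1 indicates the messages are, as hypothesized, linear transformations of the true forces.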
We are also able to find the force law when it is unknown by using symbolic regression to fit an algebraic function that approximates $\phi^e$. We demonstrate this on a trained GN for one of the 2D n-body problems, using eureqa from Schmidt and Lipson (2009) to fit algebraic equations to the messages. We allow it to use algebraic operators as well as input variables ($\Delta x$ and $\Delta y$ for the component separation, $r$ for distance, and $m$ for the sending body’s mass) and real constants, with a complexity assigned to each of these operators and values. This returns a list of the models with the lowest mean square error at each complexity. We parametrize Occam’s razor to find the “best” algebraic model by taking the model that maximizes the fractional drop in mean square error over the best model at a lower complexity. The “best” model found by the symbolic regression for the first output element of $e'_k$ is a linear combination of the components of the true force, $F_k$. We can see this is approximately the same linear transformation as the components in the left plot of fig. 2, but this algebraic expression was learned from scratch.
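The Occam's-razor selection rule described above can be sketched as a toy reimplementation (eureqa itself performs the search; this only picks from its Pareto front of one best model per complexity level):

```python
def select_best(models):
    """Pick the model maximizing the fractional MSE drop over the
    best model at any lower complexity.

    models : list of (complexity, mse) pairs, one per complexity level.
    """
    models = sorted(models)                      # order by complexity
    best_idx, best_drop = 0, 0.0
    running_min = models[0][1]                   # best MSE seen so far
    for i in range(1, len(models)):
        _, mse = models[i]
        drop = (running_min - mse) / running_min # fractional improvement
        if drop > best_drop:
            best_idx, best_drop = i, drop
        running_min = min(running_min, mse)
    return models[best_idx]
```

For example, on a front where the error falls from 9.0 to 1.0 at complexity 5 and only marginally thereafter, the rule selects the complexity-5 model.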
We now test whether the $L_{e'} = D$ GN will generalize to more nodes better than a GN with a larger $L_{e'}$. This is because it is possible for a GN with a high-dimensional message-passing space, trained on a fixed number of bodies, to “cheat.” One example of cheating would be for $\phi^e$ to concatenate each sending node’s properties along the message, and for $\phi^v$ to calculate the forces from these and add them itself. When a new body is added, this calculation might break. While it is still possible for $\phi^e$ to develop an elaborate encoding scheme to cheat at this problem even with $L_{e'} = D$, it seems more natural for $e'_k$ to learn the true force when $L_{e'} = D$, and therefore to show improved generalization to a greater number of nodes.
We test the hypothesis of better generalization with $L_{e'} = D$ in fig. 3, by training GNs with different $L_{e'}$ on the 3D simulations. The observed trend is that systems with $L_{e'} > D$ see their loss blow up with a larger number of bodies, presumably because they have “cheated” slightly and not learned the force law in $\phi^e$ alone but in a combination of $\phi^e$ and $\phi^v$, whereas the $L_{e'} = D$ system’s $\phi^e$ has learned a projection of the true forces and is able to generalize better for a greater number of bodies. A conclusion of this may be that one can optimize GNs by minimizing $L_{e'}$ to the known minimum dimension required to transmit the relevant information (e.g., $L_{e'} = 3$ for 3D forces), or, if this dimension is unknown, by decreasing $L_{e'}$ until performance drops off.
We have demonstrated an approach for imposing physically motivated inductive biases on graph networks to learn interpretable representations and improve zero-shot generalization. We have shown through experiment that graph network models which implement this inductive bias can learn message representations equivalent to the true force vectors for n-body gravitational and spring-like simulations in 2D and 3D. We have also demonstrated a generic technique for finding an unknown force law: using symbolic regression to fit explicit algebraic equations to our trained model’s message function. Because GNs have more explicit sub-structure than their more homogeneous deep learning relatives (e.g., plain MLPs, convolutional networks), we can draw more fine-grained interpretations of their learned representations and computations. Finally, we have demonstrated that our model generalizes better at inference time to systems with more bodies than had been experienced during training.
Acknowledgments: Miles Cranmer and Rui Xu thank Professor S.Y. Kung for insightful suggestions on early work, as well as Zejiang Hou for his comments on an early presentation. Miles Cranmer would like to thank David Spergel for advice on this project, and Thomas Kipf, Alvaro Sanchez, and members of the DeepMind team for helpful comments on a draft of this paper.
- Abadi et al.  M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL https://www.tensorflow.org/. Software available from tensorflow.org.
- Battaglia et al.  P. Battaglia, R. Pascanu, M. Lai, D. J. Rezende, et al. Interaction networks for learning about objects, relations and physics. In Advances in neural information processing systems, pages 4502–4510, 2016.
- Battaglia et al.  P. W. Battaglia, J. B. Hamrick, V. Bapst, A. Sanchez-Gonzalez, V. Zambaldi, M. Malinowski, A. Tacchetti, D. Raposo, A. Santoro, R. Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018.
- Bronstein et al.  M. M. Bronstein, J. Bruna, Y. LeCun, A. Szlam, and P. Vandergheynst. Geometric deep learning: going beyond euclidean data. IEEE Signal Processing Magazine, 34(4):18–42, 2017.
- Chang et al.  M. B. Chang, T. Ullman, A. Torralba, and J. B. Tenenbaum. A compositional object-based approach to learning physical dynamics. arXiv preprint arXiv:1612.00341, 2016.
- Gilmer et al.  J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1263–1272. JMLR. org, 2017.
- Kipf et al.  T. Kipf, E. Fetaya, K.-C. Wang, M. Welling, and R. Zemel. Neural relational inference for interacting systems. arXiv preprint arXiv:1802.04687, 2018.
- Li et al.  Y. Li, J. Wu, R. Tedrake, J. B. Tenenbaum, and A. Torralba. Learning particle dynamics for manipulating rigid bodies, deformable objects, and fluids. arXiv preprint arXiv:1810.01566, 2018.
- Mrowca et al.  D. Mrowca, C. Zhuang, E. Wang, N. Haber, L. F. Fei-Fei, J. Tenenbaum, and D. L. Yamins. Flexible neural representation for physics prediction. In Advances in Neural Information Processing Systems, pages 8799–8810, 2018.
- Sanchez-Gonzalez et al.  A. Sanchez-Gonzalez, N. Heess, J. T. Springenberg, J. Merel, M. Riedmiller, R. Hadsell, and P. Battaglia. Graph networks as learnable physics engines for inference and control. arXiv preprint arXiv:1806.01242, 2018.
- Scarselli et al.  F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80, 2009.
- Schmidt and Lipson  M. Schmidt and H. Lipson. Distilling free-form natural laws from experimental data. Science, 324(5923):81–85, 2009.