Discovering Symbolic Models from Deep Learning with Inductive Biases

06/19/2020 · by Miles Cranmer, et al.

We develop a general approach to distill symbolic representations of a learned deep model by introducing strong inductive biases. We focus on Graph Neural Networks (GNNs). The technique works as follows: we first encourage sparse latent representations when we train a GNN in a supervised setting, then we apply symbolic regression to components of the learned model to extract explicit physical relations. We find the correct known equations, including force laws and Hamiltonians, can be extracted from the neural network. We then apply our method to a non-trivial cosmology example-a detailed dark matter simulation-and discover a new analytic formula which can predict the concentration of dark matter from the mass distribution of nearby cosmic structures. The symbolic expressions extracted from the GNN using our technique also generalized to out-of-distribution data better than the GNN itself. Our approach offers alternative directions for interpreting neural networks and discovering novel physical principles from the representations they learn.


1 Introduction

The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve. We should be grateful for it and hope that it will remain valid in future research and that it will extend, for better or for worse, to our pleasure, even though perhaps also to our bafflement, to wide branches of learning.—Eugene Wigner “The Unreasonable Effectiveness of Mathematics in the Natural Sciences” ([1]).

For thousands of years, science has leveraged models made out of closed-form symbolic expressions, thanks to their many advantages: algebraic expressions are usually compact, present explicit interpretations, and generalize well. However, finding these algebraic expressions is difficult. Symbolic regression is one option: a supervised machine learning technique that assembles analytic functions to model a given dataset. However, typically one uses genetic algorithms—essentially a brute force procedure as in [2]—which scale exponentially with the number of input variables and operators. Many machine learning problems are thus intractable for traditional symbolic regression.

On the other hand, deep learning methods allow efficient training of complex models on high-dimensional datasets. However, these learned models are black boxes and difficult to interpret. Furthermore, generalization is difficult without prior knowledge about the data imposed directly on the model. Even if we impose strong inductive biases on the models to improve generalization, the learned parts of networks are typically piecewise-linear approximations which extrapolate linearly (if using ReLU activations [3]).

Here, we propose a general framework to leverage the advantages of both deep learning and symbolic regression. As an example, we study Graph Networks (GNs or GNNs) [4] as they have strong and well-motivated inductive biases that are very well suited to problems we are interested in. Then we apply symbolic regression to fit different internal parts of the learned model that operate on reduced size representations. The symbolic expressions can then be joined together, giving rise to an overall algebraic equation equivalent to the trained GN. Our work is a generalized and extended version of that in [5].

We apply our framework to three problems—rediscovering force laws, rediscovering Hamiltonians, and a real world astrophysical challenge—and demonstrate that we can drastically improve generalization, and distill plausible analytical expressions. We not only recover the injected closed-form physical laws for Newtonian and Hamiltonian examples, but we also derive a new interpretable closed-form analytical expression that can be useful in astrophysics.

2 Framework

Our framework can be summarized as follows. (1) Engineer a deep learning model with a separable internal structure that provides an inductive bias well matched to the nature of the data. Specifically, in the case of interacting particles, we use Graph Networks as the core inductive bias in our models. (2) Train the model end-to-end using available data. (3) Fit symbolic expressions to the distinct functions learned by the model internally. (4) Replace these functions in the deep model by the symbolic expressions. This procedure, which has the potential to discover new symbolic expressions for non-trivial datasets, is illustrated in fig. 1.

Figure 1: A cartoon depicting how we extract physical equations from a dataset.

Particle systems and Graph Networks.

In this paper we focus on problems that can be well described as interacting particle systems. Nearly all of the physics we experience in our day-to-day life can be described in terms of interaction rules between particles or entities, so this is broadly relevant. Recent work has leveraged the inductive biases of Interaction Networks (INs) [6] in their generalized form, the Graph Network, a type of Graph Neural Network [7, 8, 9], to learn models of particle systems in many physical domains [6, 10, 11, 12, 13, 14, 15, 16].

Therefore we use Graph Networks (GNs) at the core of our models, and incorporate into them physically motivated inductive biases appropriate for each of our case studies. Some other interesting approaches for learning low-dimensional general dynamical models include [17, 18, 19]. Other related work which studies the physical reasoning abilities of deep models includes [20, 21, 22].

Internally, GNs are structured into three distinct components: an edge model, a node model, and a global model, which take on different but explicit roles in a regression problem. The edge model, or “message function,” which we denote by ϕ^e, maps from a pair of nodes (v_i, v_j) connected by an edge in a graph, together with any vector information for the edge, to a message vector. These message vectors are summed element-wise for each receiving node over all of its sending nodes, and the summed vector is passed to the node model. The node model, ϕ^v, takes the receiving node and the summed message vector, and computes an updated node: a vector representing some property or dynamical update. Finally, a global model ϕ^u aggregates all messages and all updated nodes and computes a global property. ϕ^e, ϕ^v, and ϕ^u are usually approximated using multilayer perceptrons, making them differentiable end-to-end. More details on GNs are given in the appendix. We illustrate the internal structure of a GN in fig. 2.

Figure 2: An illustration of the internal structure of the graph neural network we use in some of our experiments. Note that the comparison to Newtonian mechanics is purely for explanatory purposes and is not explicit. Differences include: the “forces” (messages) are often high dimensional, the nodes need not be physical particles, ϕ^e and ϕ^v are arbitrary learned functions, and the output need not be an updated state. However, the rough equivalence between this architecture and physical frameworks allows us to interpret learned formulas in terms of existing physics.

GNs are the ideal candidate for our approach due to their inductive biases shared by many physics problems. (a) They are equivariant under particle permutations. (b) They are differentiable end-to-end and can be trained efficiently using gradient descent. (c) They make use of three separate and interpretable internal functions ϕ^e, ϕ^v, ϕ^u, which are our targets for the symbolic regression. GNs can also be embedded with additional symmetries as in [23, 24], but we do not implement these.

Symbolic regression.

After training the Graph Networks, we use the symbolic regression package eureqa [2] to perform symbolic regression and fit compact closed-form analytical expressions to ϕ^e, ϕ^v, and ϕ^u independently. eureqa works by using a genetic algorithm to combine algebraic expressions stochastically. The technique is analogous to natural selection, where the “fitness” of each expression is defined in terms of simplicity and accuracy. The operators considered in the fitting process are +, −, ×, ÷, and ^, as well as real constants. After fitting expressions to each part of the graph network, we substitute the expressions into the model to create an alternative analytic model. We then refit any parameters in the symbolic model to the data a second time, to avoid accumulated approximation error. Further details are given in the appendix. There are many other alternative approaches to eureqa, including [25, 26, 27, 28, 29, 30]. An alternative way of interpreting GNNs is given in [31].

A key advantage of fitting a symbolic model on internal GN functions is that the symbolic regression never needs to consider more than two particles at once. This makes the symbolic regression problem tractable.
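As an illustration of this step, the sketch below fits a symbolic expression to recorded per-edge inputs and one learned message component. Our experiments use the proprietary eureqa package; here we substitute the open-source PySR package purely for illustration, and the file names are hypothetical.

```python
# Minimal sketch (not the exact pipeline used in our experiments): fit a
# symbolic expression to one learned message component, using PySR as an
# open-source stand-in for eureqa.
import numpy as np
from pysr import PySRRegressor

# Hypothetical arrays recorded from a trained GN over training-set edges:
#   edge_inputs:        (n_edges, n_features), e.g. masses, charges, dx, dy, r
#   message_component:  (n_edges,), the most significant message feature
edge_inputs = np.load("edge_inputs.npy")
message_component = np.load("message_component.npy")

model = PySRRegressor(
    niterations=100,
    binary_operators=["+", "-", "*", "/"],
    unary_operators=["exp", "log"],
    maxsize=20,  # cap on expression complexity
)
model.fit(edge_inputs, message_component)
print(model.equations_)  # candidate expressions, one per complexity level
```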

Compact internal representations.

While training, we encourage the model to use compact internal representations for latent hidden features (e.g., messages) by adding regularization terms to the loss (we investigate using L1 and KL penalty terms with a fixed prior; see the appendix for details). One motivation for doing this is based on Occam’s razor: science always prefers the simpler of two models or representations which give similar accuracy. Another, stronger motivation is that if there is a law that perfectly describes a system in terms of summed message vectors in a compact space (what we call a linear latent space), then we expect that a trained GN, with message vectors of the same dimension as that latent space, will learn messages that are rotations of the true vectors. We give a mathematical explanation of this reasoning in the appendix, and emphasize that while it may seem obvious in hindsight, our work is the first to demonstrate it. More practically, by reducing the size of the latent representations, we can filter out all low-variance latent features without compromising the accuracy of the model, vastly reducing the dimensionality of the hidden vectors. This makes the symbolic regression of the internal models more tractable.
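As a concrete sketch of the L1 variant of this regularization (a minimal illustration in PyTorch; the penalty weight shown is a placeholder, not the value used in our experiments):

```python
import torch

def regularized_loss(pred_accel, true_accel, messages, l1_weight=1e-2):
    """Mean absolute error on the predicted accelerations plus an L1 penalty
    on the message components, encouraging the network to use only a few
    informative message dimensions.

    messages: (num_edges, message_dim) tensor of edge messages.
    l1_weight: placeholder regularization weight (an assumption).
    """
    mae = torch.mean(torch.abs(pred_accel - true_accel))
    sparsity = l1_weight * torch.mean(torch.abs(messages))
    return mae + sparsity
```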

Implementation details.

We write our models with PyTorch [32] and PyTorch Geometric [33]. We train them with a decaying learning-rate schedule using Adam [34]. The symbolic regression technique is described in section 4.1. More details are provided in the appendix.

3 Case studies

In this section we present three specific case studies where we apply our proposed framework using additional inductive biases.

Newtonian dynamics.

Newtonian dynamics describes the dynamics of particles according to Newton’s laws of motion: the motion of each particle is modeled using the incident forces from nearby particles, which change its position, velocity, and acceleration. Many important forces in physics (e.g., the gravitational force between two masses, −G m_1 m_2 r̂ / r²) are defined on pairs of particles, analogous to the message function ϕ^e of our Graph Networks. The summation that aggregates messages is analogous to the calculation of the net force on a receiving particle. Finally, the node function ϕ^v acts like Newton’s second law: acceleration equals the net force (the summed message) divided by the mass of the receiving particle.

To train a model on Newtonian dynamics data, we train the GN to predict the instantaneous acceleration of each particle, compared against that calculated in the simulation. While Newtonian mechanics inspired the original development of INs, no previous attempt to distill the relationship between the forces and the learned messages has been successful. When applying the framework to this Newtonian dynamics problem (as illustrated in fig. 1), we expect the model trained with our framework to discover that the optimal dimensionality of the messages should match the number of spatial dimensions. We also expect to recover algebraic formulas for pairwise interactions, and to generalize better than purely learned models. We refer our readers to section 4.1 and the appendix for more details.

Hamiltonian dynamics.

Hamiltonian dynamics defines a single scalar function for an entire system, the Hamiltonian ℋ, corresponding to the total energy of the system, which determines the time derivatives of the canonical positions (q) and momenta (p). In short, Hamilton’s equations can be written as: q̇ = ∂ℋ/∂p and ṗ = −∂ℋ/∂q.

For this model we will be using a Hamiltonian Neural Network inductive bias [35, 36], combining it with the GN inductive bias [37], which is called a Hamiltonian Graph Network (HGN). Specifically, the global model of the GN will output a single scalar value for the entire system representing the energy, and hence the GN will have the same functional form as a Hamiltonian. By then taking the partial derivatives of the GN-predicted ℋ with respect to the positions and momenta, q and p, of the input nodes, we can calculate the updates to the positions and momenta via Hamilton’s equations.
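A minimal sketch of this use of automatic differentiation (our illustration, not the exact HGN implementation; `hamiltonian_net` is a hypothetical stand-in for the GN's scalar energy output):

```python
import torch

def hamiltonian_time_derivatives(hamiltonian_net, q, p):
    """Given a network mapping (q, p) -> scalar total energy H, return
    (dq/dt, dp/dt) from Hamilton's equations:
        dq/dt = dH/dp,   dp/dt = -dH/dq.
    """
    q = q.requires_grad_(True)
    p = p.requires_grad_(True)
    H = hamiltonian_net(q, p).sum()  # scalar energy for the whole system
    dH_dq, dH_dp = torch.autograd.grad(H, (q, p), create_graph=True)
    return dH_dp, -dH_dq
```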

Now, we impose a slightly different inductive bias on the global model than in [37], and name this the “Flattened HGN” or FlatHGN: instead of learning ℋ as a global function, we fix it to be the sum of a pairwise interaction term, ℋ_pair, corresponding to the aggregated messages for the full graph, and a per-particle term, ℋ_self, corresponding to the aggregated updated nodes. This is a Hamiltonian version of the Lagrangian Graph Network in [38], and is similar to [39]. It is still general enough to express many physical systems, as nearly all of physics can be written in terms of summed interaction energies, but could also be relaxed in the context of the framework.

Even though the model is trained end-to-end, we expect our framework to allow us to extract analytical expressions for both the per-particle kinetic energy, and the scalar pairwise potential energy. We refer our readers to our section 4.2 and the Appendix for more details.

Dark matter halos for cosmology.

We also apply our framework to a dataset generated from state-of-the-art dark matter simulations [40]. We predict a property (the “overdensity” δ) of a dark matter blob (called a “halo”) from the properties (positions, velocities, masses) of halos nearby. We would like to extract this relationship as an analytic expression so that we may interpret it theoretically. This problem differs from the previous two use cases in many ways, including: (1) it is a real-world problem where an exact analytical expression is unknown; (2) the problem does not involve dynamics; rather, it is a regression problem on a static dataset; and (3) the dataset is not made of particles, but rather a grid of density that has been grouped and reduced to hand-made features. Similarly, we do not know the dimensionality of the interactions, should a linear latent space exist. We rely on our inductive bias to find the optimal dimensionality of the problem, and then yield an interpretable model that performs better than existing analytical approximations. We refer our readers to section 4.3 and the appendix for further details.

4 Experiments & results

Figure 3: A diagram showing how we implement and exploit our inductive bias on GNs. A video of this figure during training can be seen by going to the URL https://github.com/MilesCranmer/symbolic_deep_learning/blob/master/video_link.txt.

4.1 Newtonian dynamics

We train our Newtonian dynamics GNs on data for simple N-body systems with known force laws. We then apply our technique to recover the known force laws via the representations learned by the message function ϕ^e.

Data.

The dataset consists of N-body particle simulations in two and three dimensions, under different interaction laws. We used the following forces: (a) a 1/r orbital force; (b) a 1/r² orbital force; (c) a charged-particle force, proportional to q_1 q_2 / r²; (d) damped springs, with a spring potential and damping proportional and opposite to the speed; (e) discontinuous forces, which are zero within a given separation and switch to a non-zero form beyond it; and (f) springs between all particles, with a (r − 1)² potential. The simulations themselves contain the masses and charges of 4 or 8 particles, with positions, velocities, and accelerations as a function of time. Further details of these systems are given in the appendix, with example trajectories shown in fig. 4.

Model training.

The models are trained to predict the instantaneous acceleration of every particle given the current state of the system. To investigate the importance of the size of the message representations for interpreting the messages as forces, we train our GN using 4 different strategies: 1. Standard, a GN with 100 message components; 2. Bottleneck, a GN with the number of message components matching the dimensionality of the problem (2 or 3); 3. L1, the same as “Standard” but with an L1 regularization loss term on the messages; and 4. KL, the same as “Standard” but regularizing the messages using the Kullback-Leibler (KL) divergence with respect to a Gaussian prior. Both the L1 and KL strategies encourage the network to find compact representations for the message vectors. We optimize the mean absolute error between the predicted acceleration and the true acceleration of each node. Additional training details are given in the appendix and in the codebase.

Performance comparison.

To evaluate the learned models, we generate a new dataset from a different random seed. We find that the model with L1 regularization has the greatest prediction performance in most cases (see table 3). It is worth noting that the bottleneck model, even though it has the correct dimensionality, performs worse than the model using L1 regularization. We speculate this is because the low dimensionality of the messages at the start of training makes the optimization harder to solve via gradient descent in a fixed training time.

Interpreting the message components.

As a first attempt to interpret the information in the message components, we pick the D message features (where D is the dimensionality of the simulation) with the highest variance (or KL divergence), and fit each to a linear combination of the true force components. We find that while the GN trained in the Standard setting does not show strong correlations with the force components (also seen in fig. 5), all other models, for which the effective message size is constrained explicitly (Bottleneck) or implicitly (KL or L1) to be low-dimensional, yield messages that are highly correlated with the true forces (see table 1, which reports the fit errors with respect to the true forces), with the model trained with L1 regularization showing the highest correlations. An explicit demonstration that the messages in a graph network learn forces had not been made before our work.

The messages in these models are thus explicitly interpretable as forces. The video at https://github.com/MilesCranmer/symbolic_deep_learning/blob/master/video_link.txt (fig. 3) shows a fit of the message components over time during training, demonstrating how the model discovers a message representation that is highly correlated with a rotation of the true force vector, in an unsupervised way.
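A minimal sketch of this linear fit (our illustration; array names are hypothetical, and we report an R² here whereas table 1 reports a normalized mean-square error):

```python
import numpy as np

def fit_messages_to_forces(messages, forces):
    """Least-squares fit of each significant message component to a linear
    combination of the true force components (plus a constant offset).

    messages: (n_edges, D) most-significant message components
    forces:   (n_edges, D) true pairwise force vectors from the simulator
    Returns the R^2 of the fit for each message component.
    """
    design = np.concatenate([forces, np.ones((len(forces), 1))], axis=1)
    coefs, *_ = np.linalg.lstsq(design, messages, rcond=None)
    residuals = messages - design @ coefs
    return 1.0 - residuals.var(axis=0) / messages.var(axis=0)
```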

Table 1: The normalized mean square error of a fit of a linear combination of the true force components to the message components for a given model (see text), computed for the Charge, 1/r, 1/r², and Spring simulations in 2D and 3D under the Standard, Bottleneck, L1, and KL training strategies. An example of this comparison is shown in fig. 3. A small number indicates that the messages are strongly correlated with the true force vectors.

Symbolic regression on the internal functions.

We now demonstrate symbolic regression to extract force laws from the messages, without using prior knowledge of each force’s form. To do this, we record the most significant message component of ϕ^e, which we refer to as ϕ^e_1, over random samples of the training dataset. The inputs to the regression are the masses, charges, and positions of the receiving and sending nodes, as well as simplified variables to help the symbolic regression: e.g., Δx for the displacement in x and r for the distance between the two nodes.

We then use eureqa to fit ϕ^e_1 to the inputs by minimizing the mean absolute error (MAE) over various analytic functions. Analogous to Occam’s razor, we find the “best” algebraic model by asking eureqa to provide multiple candidate fits at different complexity levels (where complexity is scored as a function of the number and type of operators, constants, and input variables used), and selecting the fit that maximizes the fractional drop in mean absolute error (MAE) over the increase in complexity from the next-best model.
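A small sketch of this selection criterion, under our reading that the score is the drop in log(MAE) per unit increase in complexity (the exact normalization used by the fitting tool may differ):

```python
import math

def select_equation(candidates):
    """candidates: list of (complexity, mae, expression) tuples, one per
    complexity level reported by the symbolic regression tool.  Returns the
    expression with the largest drop in log(MAE) per unit complexity
    relative to the next-simplest candidate."""
    candidates = sorted(candidates)  # ascending complexity
    best_score, best_expr = -float("inf"), None
    for (c0, mae0, _), (c1, mae1, expr) in zip(candidates, candidates[1:]):
        score = (math.log(mae0) - math.log(mae1)) / (c1 - c0)
        if score > best_score:
            best_score, best_expr = score, expr
    return best_expr
```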

From this, we recover many analytical expressions that are equivalent to the simulated force laws (the fitted constants are learned from the data):

  • Spring, 2D, L1 (the recovered expression matches the expected spring force law).

  • An orbital force, 3D, Bottleneck (matches the expected orbital force law).

  • Discontinuous, 2D, L1 (matches the expected discontinuous force law).

Note that reconstruction does not always succeed, especially for training strategies other than L1 or bottleneck, where the model cannot successfully find compact representations of the right dimensionality (see some examples in the appendix).

4.2 Hamiltonian dynamics

Using the same datasets from the Newtonian dynamics case study, we also train our “FlatHGN,” with the Hamiltonian inductive bias, and demonstrate that we can extract scalar potential energies, rather than forces, for all of our problems. For example, in the case of charged particles, with expected potential proportional to q_1 q_2 / r, symbolic regression applied to the learned message function yields an expression of this expected form. (We have removed constant terms that do not depend on the position or momentum, as those are arbitrary offsets in the Hamiltonian which have no impact on the dynamics; see the appendix for more details.)

It is also possible to fit the per-particle term ℋ_self; however, in this case the same kinetic energy expression is recovered for all systems. In terms of performance, the Hamiltonian models are comparable to the L1-regularized model across all datasets (see the supplementary results table).

Note that in this case, by design, the “FlatHGN” has a message function with a dimensionality of 1, matching the scalar output of the Hamiltonian, so no regularization is needed: the message size is directly constrained to the right dimension.

4.3 Dark matter halos for cosmology

Now, one may ask: “will this strategy also work for general regression problems, non-trivial datasets, complex interactions, and unknown laws?” Here we give an example that satisfies all four of these concerns, using data from a gravitational simulation of the Universe.

Cosmology studies the evolution of the Universe from the Big Bang to the complex galaxies and stars we see today [41]. The interactions of various types of matter and energy drive this evolution. Dark matter alone constitutes about 85% of the total matter in the Universe [42, 43], and is therefore extremely important for the development of galaxies. Dark matter particles clump together and act as gravitational basins called “halos,” which pull regular baryonic matter together to produce stars and form larger structures such as filaments and galaxies. It is an important question in cosmology to predict properties of dark matter halos based on their “environment,” which consists of other nearby dark matter halos. Here we study the following problem: how can we predict the excess amount of matter in comparison to its surroundings (the overdensity δ) for a dark matter halo, based on its properties and those of its neighboring dark matter halos?

A hand-designed estimator for the functional form of δ_i for halo i might correlate with the mass of the same halo, M_i, as well as the total mass within a smoothing radius of 20 distance units around it. The intuition behind this scaling is described in [44]. Can we find a better equation that fits the data more accurately, using our methodology?

Data, training and symbolic regression.

We study this problem with the open-sourced N-body dark matter simulations from [40]. We choose the zeroth simulation in this dataset, at the final time step (the current-day Universe), which contains 215,854 dark matter halos. Each halo has a mass M_i, position r_i, and velocity v_i. We also compute the smoothed overdensity δ_i at the location of the center of each halo. We convert this set of halos into a graph by connecting halos within fifty distance units (each distance unit is approximately 3 million light years) of each other. This results in 30,437,218 directional edges between halos, or about 71 neighbors per halo on average. We then attempt to predict δ_i for each halo with a GN. Training details are the same as for the Newtonian simulations, but we switch to 500 hidden units after hyperparameter tuning based on GN accuracy.
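A minimal sketch of this graph construction (our illustration; it ignores the periodic boundary of the simulation box):

```python
import numpy as np
from scipy.spatial import cKDTree

def build_halo_graph(positions, radius=50.0):
    """Connect every pair of halos closer than `radius` (in the simulation's
    distance units) by two directional edges, as described above.

    positions: (n_halos, 3) array of halo positions.
    Returns (receivers, senders) index arrays for the GN edge list.
    """
    tree = cKDTree(positions)
    pairs = np.array(list(tree.query_pairs(r=radius)))  # (n_pairs, 2), i < j
    # Each undirected pair becomes two directional edges.
    receivers = np.concatenate([pairs[:, 0], pairs[:, 1]])
    senders = np.concatenate([pairs[:, 1], pairs[:, 0]])
    return receivers, senders
```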

The GN trained with L1 regularization appears to have messages containing only one informative feature, so we extract samples of this message component over the training set for random pairs of halos, and samples of the node function for random receiving halos and their summed messages. The formula extracted by the algorithm is given in table 2 as “Best, with mass.” The form of the formula is new, yet it captures a well-known relationship between halo mass and environment: the bias-mass relationship. We refit the parameters in the formula on the original training data to avoid accumulated approximation error from the multiple levels of function fitting. We achieve a loss of 0.0882, whereas the hand-designed formula achieves a loss of 0.121. It is quite surprising that the formula extracted by our approach is able to achieve a better fit than the formula hand-designed by scientists.

Test                        Error
Old:  Constant (N/A)        0.421
Old:  Simple                0.121
New:  Best, without mass    0.120
New:  Best, with mass       0.0882

Table 2: A comparison of both known and discovered formulas for dark matter overdensity (test error shown). Fitted parameters are given in the appendix.

The formula makes physical sense. Halos closer to the dark matter halo of interest should influence its properties more, and thus the summed function scales inversely with distance. As in the hand-designed formula, the overdensity should scale with the total matter density nearby, and we see this in that we sum over the masses of neighbors. The other differences are very interesting, and less clear; we plan to do a detailed interpretation of these results in a future astrophysics study.

As a follow-up, we also investigated whether we could predict the halo overdensity from velocity and position information alone. This is useful because the most direct observational information available is halo velocities. We perform an identical analysis without the mass information, and find a curiously similar formula: the relative speed between two neighbors appears as a proxy for mass (see table 2). This makes sense, as a more massive object will have stronger gravity, accelerating infalling particles near it to higher speeds. This formula is also new to cosmologists, and can in principle help push forward cosmological analysis.

Symbolic generalization.

As we know that our physical world is well described by mathematics, we can use it as a powerful prior for creating new models of our world. Therefore, if we distill a neural network into a simple algebra, will the algebra generalize better to unseen data? Neural nets excel at learning in high-dimensional spaces, so perhaps, by combining both of these types of models, one can leverage the unique advantages of each. Such an idea is discussed in detail in [45].

Here we study this on the cosmology example by masking 20% of the data: the halos satisfying a cutoff condition on one of their properties. We then proceed through the same training procedure as before, learning a GN to predict δ_i with L1 regularization, and then extracting messages for examples in the training set. Remarkably, we obtain a functionally identical symbolic expression when extracting the formula from the graph network on this subset of the data. We fit its constants to the same masked portion of data on which the graph network was trained. The graph network itself obtains an average error of 0.0634 on the training set, and 0.142 on the out-of-distribution data. Meanwhile, the symbolic expression achieves 0.0811 on the training set, but 0.0892 on the out-of-distribution data. Therefore, for this problem, a symbolic expression generalizes much better than the very graph neural network it was extracted from. This alludes back to Eugene Wigner’s article: the language of simple, symbolic models is remarkably effective at describing the universe.

5 Conclusion

We have demonstrated a general approach for imposing physically motivated inductive biases on GNs and Hamiltonian GNs to learn interpretable representations, with potentially improved zero-shot generalization. Through experiments, we have shown that GN models which implement a bottleneck or L1 regularization in the message-passing layer, or a Hamiltonian GN flattened into pairwise and self terms, can learn message representations equivalent to linear transformations of the true force vectors or energies. We have also demonstrated a generic technique for finding an unknown force law or energy from these models: symbolic regression is capable of fitting explicit equations to our trained model’s message function. We introduced the Flattened Hamiltonian Graph Network, which allowed us to learn symbolic forms of pairwise interaction energies. Because GNs have more explicit substructure than their more homogeneous deep learning relatives (e.g., plain MLPs, convolutional networks), we can draw more fine-grained interpretations of their learned representations and computations. Finally, we have demonstrated our algorithm on a non-trivial dataset, and discovered a new law for cosmological dark matter.

Acknowledgment: Miles would like to thank Christina Kreisch and Francisco Villaescusa-Navarro for assistance with the Cosmology dataset; and Christina Kreisch, Oliver Philcox, and Edgar Minasyan for comments on a draft of the paper.

Our code made use of the following Python packages: numpy, scipy, sklearn, jupyter, matplotlib, pandas, torch, tensorflow, jax, and torch_geometric [46, 47, 48, 49, 50, 51, 32, 52, 53, 33].

References

Appendix A Model Implementation Details

Code for our implementation can be found at https://github.com/MilesCranmer/symbolic_deep_learning. Here we describe how one can implement our model from scratch in a deep learning framework. The main argument in this paper is that one can apply strong inductive biases to a deep learning model to simplify the extraction of a symbolic representation of the learned model. While we emphasize that this idea is general, in this section we focus on the specific Graph Neural Networks we have used as an example throughout the paper.

a.1 Basic Graph Representation

We would like to use a graph G to predict an updated graph G′. Our input dataset is a graph consisting of N nodes with L_v features each: {v_i}, with each v_i ∈ ℝ^{L_v}. The nodes are connected by edges {(r_k, s_k)}, where r_k and s_k are the indices of the receiving and sending nodes, respectively. We would like to use this graph to predict another set of nodes {v′_i}, where each v′_i is the output node corresponding to v_i. The number of features in these predicted nodes, L_{v′}, need not be the same as for the input nodes (L_v), though this could be the case for dynamical models where one is predicting updated states of particles. For more general regression problems, the number of output features is arbitrary.

Edge model.

The prediction is done in two parts. We create the first neural network, the edge model (or “message function”) ϕ^e, to compute messages from one node to another: ϕ^e : ℝ^{2 L_v} → ℝ^{L_e}, where L_e is the number of message features. In the bottleneck model, one sets L_e equal to the known dimension of the force, which is 2 or 3 for us. In our other models, we set L_e = 100 for the standard and L1 models; the KL model (described separately later on) doubles the output dimension of ϕ^e so that it can predict a mean and a variance for each message component. We create ϕ^e as a multi-layer perceptron with ReLU activations and two hidden layers of a few hundred hidden nodes each. The mapping is e_k = ϕ^e(v_{r_k}, v_{s_k}) for all edges indexed by k (i.e., we concatenate the receiving and sending node features).

Aggregation.

These messages are then pooled via element-wise summation for each receiving node i into the summed message ē_i. This can be written as ē_i = Σ_{k : r_k = i} e_k.

Node model.

We create a second neural network, ϕ^v, to predict the output nodes from the corresponding summed message and input node. It has the mapping v̂′_i = ϕ^v(v_i, ē_i), where v̂′_i is the prediction for v′_i. We also create ϕ^v as a multi-layer perceptron with ReLU activations and two hidden layers of the same width as the edge model. This model is then trained with the loss function described later in this section.

Summary.

We can write out our forward model for the bottleneck, standard, and L1 models as: v̂′_i = ϕ^v(v_i, Σ_{k : r_k = i} ϕ^e(v_{r_k}, v_{s_k})).
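A minimal sketch of this forward model in PyTorch (hidden sizes are illustrative, not the exact values used in our experiments):

```python
import torch
from torch import nn

class MinimalGN(nn.Module):
    """Edge MLP (message function), element-wise sum-pooling per receiving
    node, and node MLP, as described above.  This is a sketch, not the exact
    architecture from our codebase."""

    def __init__(self, node_dim, msg_dim, out_dim, hidden=128):
        super().__init__()
        self.edge_mlp = nn.Sequential(
            nn.Linear(2 * node_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, msg_dim))
        self.node_mlp = nn.Sequential(
            nn.Linear(node_dim + msg_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim))

    def forward(self, nodes, receivers, senders):
        # One message per directed edge (receiver, sender).
        messages = self.edge_mlp(
            torch.cat([nodes[receivers], nodes[senders]], dim=-1))
        # Sum the messages element-wise for each receiving node.
        pooled = torch.zeros(nodes.shape[0], messages.shape[1],
                             dtype=nodes.dtype, device=nodes.device)
        pooled.index_add_(0, receivers, messages)
        # Node update from the receiving node and its pooled message.
        return self.node_mlp(torch.cat([nodes, pooled], dim=-1)), messages
```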

Loss.

We jointly optimize the parameters in ϕ^e and ϕ^v via mini-batch gradient descent with Adam as the optimizer. Our total loss for optimization is the prediction error between v̂′_i and the true value v′_i for each predicted node, plus the message regularization term (for the L1 and KL models) and a small regularization term over all network parameters. This implementation can be visualized during training in the video at https://github.com/MilesCranmer/symbolic_deep_learning. During training, we also apply a random translation augmentation to all the particle positions to artificially generate more training data.

Next, we describe the KL variant of this model. Note that for the cosmology example in section 4.3, we use the L1 model described above but with 500 hidden nodes (found with coarse hyperparameter tuning to optimize accuracy); other parameters are kept the same.

a.2 KL Model

The KL model is a variational version of the GN implementation above, which models the messages as distributions. We choose a normal distribution for each message component, with a standard-normal prior N(0, 1). More specifically, the output of ϕ^e now maps to twice as many features, since it predicts a mean and a variance for each message component: the first half of the outputs of ϕ^e represents the means, and the second half represents the log-variances. In other words, each message is sampled as e_k ~ N(μ_k, diag(σ_k²)), a multivariate Gaussian whose mean μ_k and log-variance log σ_k² are the two halves of the ϕ^e output for edge k. Every time the graph network is run, we calculate the means and log-variances of the messages, sample each message once, pass those samples through the sum to compute a sample of ē_i, and then pass that value through the node function to compute a sample of v̂′_i. The loss is calculated as before, except that the message regularization becomes the KL divergence with respect to our Gaussian prior N(0, 1):

KL_k = ½ Σ_j (μ_{k,j}² + σ_{k,j}² − log σ_{k,j}² − 1),

summed over message components j, with a fixed regularization weight (in the language of a β-Variational Autoencoder, β = 1 is simply the standard VAE). The KL-divergence loss also encourages sparsity in the messages, similar to the L1 loss. The difference is that here, an uninformative message component will match the prior, with μ ≈ 0 and σ ≈ 1 (a KL of 0), rather than having a small absolute value. We train the networks with a decaying learning-rate schedule, as given in the example code.
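A minimal sketch of the message sampling and KL penalty for this variant (our illustration):

```python
import torch

def sample_messages_and_kl(raw_msg_params):
    """raw_msg_params: (n_edges, 2 * msg_dim) output of the edge MLP,
    interpreted as means and log-variances of a Gaussian over each message
    component.  Samples messages with the reparameterization trick and
    returns the mean KL divergence to a standard-normal prior."""
    mu, logvar = raw_msg_params.chunk(2, dim=-1)
    std = torch.exp(0.5 * logvar)
    messages = mu + std * torch.randn_like(std)  # one sample per edge
    # KL( N(mu, sigma^2) || N(0, 1) ), summed over message components.
    kl = 0.5 * (mu ** 2 + logvar.exp() - logvar - 1.0).sum(dim=-1)
    return messages, kl.mean()
```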

a.3 Constraining Information in the Messages

The hypothesis which motivated our graph network inductive bias is that if one minimizes the dimension of the vector space used by the messages in a GN, the components of the message vectors will learn to be linear combinations of the true forces (or the equivalent underlying summed function) for the system being learned. The key observation is that the message e_k could learn to correspond to the true force vector imposed on the receiving body r_k due to its interaction with the sending body s_k.

Here, we sketch a rough mathematical explanation of our hypothesis that the graph network will reconstruct the true force, given our inductive biases. Newtonian mechanics prescribes that force vectors, F_{ij}, can be summed to produce a net force, F_i = Σ_j F_{ij}, which can then be used to update the dynamics of a body. Our model uses the i-th body’s pooled messages, ē_i = Σ_j e_{ij}, to update the body’s state via ϕ^v(v_i, ē_i). If we assume our GN is trained to predict accelerations perfectly for any number of bodies, this means (ignoring mass) that ϕ^v(v_i, Σ_j e_{ij}) = Σ_j F_{ij}. Since this is true for any number of bodies, we also have the result for a single interaction: ϕ^v(v_i, e_{ij}) = F_{ij}. Thus, we can substitute this expression into the multi-interaction case: ϕ^v(v_i, Σ_j e_{ij}) = Σ_j ϕ^v(v_i, e_{ij}). From this relation, we see that ϕ^v has to be a linear transformation conditioned on v_i. Therefore, for cases where ϕ^v is invertible in its second argument (which becomes true when the message dimension is the same as the dimension of the output of ϕ^v), we can write e_{ij} as a linear transformation of F_{ij} conditioned on v_i, meaning that the message vectors are linear transformations of the true forces when the message dimension is equal to the dimension of the forces.

If the dimension of the force vectors (or what the minimum dimension of the message vectors “should” be) is unknown, one can encourage the messages to be sparse by applying L1 or Kullback-Leibler regularization to the messages in the GN. The aim is for the messages to learn the minimal vector space required for the computation automatically. This is a more mathematical explanation of why the message features are linear combinations of the force vectors when our inductive bias of a bottleneck or sparse regularization is applied. We emphasize that this is a new contribution: no previous work has explicitly identified the forces in a graph network.

General Graph Neural Networks.

In all of our models here, we assume the dataset does not have edge-specific features, such as different coupling constants between different particles, but these could be added by concatenating edge features to the receiving and sending node input to ϕ^e. We also assume there are no global properties. The graph neural network is described in general form in Battaglia et al. [2018]. All of our techniques are applicable to the general form: one would approximate ϕ^e with a symbolic model with the input edge parameters included, and also fit the global model, denoted ϕ^u.

a.4 Flattened Hamiltonian Graph Network.

As part of this study, we also consider an alternate dynamical model that is described by a linear latent space other than force vectors. In the Hamiltonian formalism of classical mechanics, the energies of pairwise interactions and the kinetic and potential energies of individual particles are pooled into a global energy value, ℋ, which is a scalar. We label the pairwise interaction energy ℋ_pair and the energy of individual particles ℋ_self. Thus, using our previous graph notation, we can write the total energy of a system as:

ℋ = Σ_i ℋ_self(v_i) + Σ_{(i,j)∈E} ℋ_pair(v_i, v_j).    (1)

For particles interacting via gravity, this would be

ℋ = Σ_i |p_i|² / (2 m_i) − Σ_{i<j} m_i m_j / |r_i − r_j|,    (2)

where p_i, m_i, and r_i indicate the momentum, mass, and position of particle i, respectively, and we have set the gravitational constant to 1. Following Greydanus et al. [2019], Sanchez-Gonzalez et al. [2019], we could model ℋ as a neural network and apply Hamilton’s equations to create a dynamical model. More specifically, as in Sanchez-Gonzalez et al. [2019], we can predict ℋ as the global property of a GN (this is called a Hamiltonian Graph Network or HGN). However, energy, like forces in Cartesian coordinates, is a summed quantity. In other words, energy is another “linear latent space” that describes the dynamics.

Therefore, we argue that an HGN will be more interpretable if we explicitly sum up energies over the system, rather than computing ℋ as a global property of a GN. Here, we introduce the “Flattened Hamiltonian Graph Network,” or “FlatHGN,” which uses eq. 1 to construct a model that works on a graph. We set up two Multi-Layer Perceptrons (MLPs), one for each node:

ℋ_self: ℝ^{L_v} → ℝ,    (3)

and one for each edge:

ℋ_pair: ℝ^{2 L_v} → ℝ.    (4)

Note that the derivatives of ℋ now propagate through the pool, e.g.,

∂ℋ/∂v_i = ∂ℋ_self(v_i)/∂v_i + Σ_{j:(i,j)∈E} ∂ℋ_pair(v_i, v_j)/∂v_i.    (5)

This model is similar to the Lagrangian Graph Network proposed in Cranmer et al. [2020]. Now, should this FlatHGN learn energy functions such that we can successfully model the dynamics of the system with Hamilton’s equations, we would expect ℋ_self and ℋ_pair to be analytically similar to parts of the true Hamiltonian. Since we have broken the traditional HGN into a FlatHGN, we now have pairwise and self energies, rather than a single global energy, and these are simpler to extract and interpret. This is a similar inductive bias to the GN we introduced previously. To train a FlatHGN, one can follow our strategy above, with the output predictions made using Hamilton’s equations applied to ℋ. One difference is that we also regularize ℋ_pair, since it is degenerate with ℋ_self in that it can pick up self-energy terms.
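A minimal sketch of the FlatHGN energy computation of eq. 1 (our illustration; `h_self` and `h_pair` stand for the two MLPs defined above):

```python
import torch

def flat_hgn_energy(h_self, h_pair, nodes, receivers, senders):
    """Total energy of the Flattened HGN: a per-particle term summed over
    nodes plus a pairwise term summed over edges.  `h_self` and `h_pair`
    are MLPs that each output one scalar per input.  Differentiating this
    scalar with respect to the node features (via autograd) propagates
    through both sums, as in eq. 5."""
    self_energy = h_self(nodes).sum()
    pair_input = torch.cat([nodes[receivers], nodes[senders]], dim=-1)
    pair_energy = h_pair(pair_input).sum()
    return self_energy + pair_energy
```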

Appendix B Simulations

Our simulations for sections 4.2 and 4.1 were written using the JAX library (https://github.com/google/jax) so that we could easily vectorize computations over the entire dataset of 10,000 simulations. Example “long exposures” for each simulation in 2D are shown in fig. 4.

Figure 4: Examples of a selection of simulations, for 4 nodes and two dimensions. Decreasing transparency shows increasing time, and size of points shows mass.

To create each simulation, we set up a pairwise potential between two particles, i (receiving) and j (sending), corresponding to each of the force laws listed in section 4.1. Here, r is the distance between the two particles plus a small constant to prevent singularities. For particle i, m_i is the mass, q_i is the charge, n is the number of particles in the simulation, r_i is the position, and ṙ_i is the velocity.

All variables lack units. The mass m_i is sampled from a log-normal distribution, and each component of the initial position and velocity is sampled from a normal distribution. The charge q_i is randomly drawn from a set of two elements, {−1, +1}. The acceleration of a given particle is then

a_i = −(1/m_i) Σ_{j≠i} ∂U(r_i, r_j)/∂r_i.    (6)

This is integrated over 1000 time steps of a fixed step size for a given random initial configuration using an adaptive RK4 integrator. The step size varies for each simulation due to the differences in scale: 0.005 and 0.001 for the two orbital-force simulations, 0.01 for Spring, 0.02 for Damped, 0.001 for Charge, and 0.01 for Discontinuous. Each simulation is performed in two and three dimensions, for 4 and 8 bodies. We store these simulations on disk. For training, the simulations for the particular problem being studied are loaded, and each instantaneous snapshot of each simulation is converted to a fully connected graph, with the predicted property (the nodes of the output graph, see appendix A) being the acceleration of the particles at that snapshot.
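For illustration, a minimal sketch of such an N-body integration, using scipy's adaptive integrator and a (r − 1)² spring potential as a stand-in for the exact potentials above (the original simulations were written in JAX):

```python
import numpy as np
from scipy.integrate import solve_ivp

def simulate_springs(masses, x0, v0, t_span=(0.0, 10.0), eps=1e-2):
    """Sketch of an N-body integration with pairwise potential U = (r - 1)^2
    (an assumption standing in for the exact potentials used in the paper).
    masses: (n,), x0 and v0: (n, dim) initial positions and velocities."""
    n, dim = x0.shape

    def derivatives(_, state):
        x = state[: n * dim].reshape(n, dim)
        v = state[n * dim :].reshape(n, dim)
        acc = np.zeros_like(x)
        for i in range(n):
            diff = x - x[i]                       # vectors from i to every j
            r = np.linalg.norm(diff, axis=1) + eps
            # -dU/dr = -2 (r - 1); the force on i points along diff when r > 1.
            f = 2.0 * (r - 1.0) / r
            f[i] = 0.0                            # no self-interaction
            acc[i] = (f[:, None] * diff).sum(axis=0) / masses[i]
        return np.concatenate([v.ravel(), acc.ravel()])

    y0 = np.concatenate([x0.ravel(), v0.ravel()])
    return solve_ivp(derivatives, t_span, y0, method="RK45", max_step=0.01)
```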

The test loss of each model trained on each simulation set is given in table 3.

Sim.       Standard  Bottleneck  L1     KL    FlatHGN
Charge-2   49        50          52     60    55
Charge-3   1.2       0.99        0.94   4.2   3.5
Damped-2   0.30      0.33        0.30   1.5   0.35
Damped-3   0.41      0.45        0.40   3.3   0.47
Disc.-2    0.064     0.074       0.044  1.8   0.075
Disc.-3    0.20      0.18        0.13   4.2   0.14
1/r-2      0.077     0.069       0.079  3.5   0.05
1/r-3      0.051     0.050       0.055  3.5   0.017
1/r^2-2    1.6       1.6         1.2    9.3   1.3
1/r^2-3    4.0       3.6         3.4    9.8   2.5
Spring-2   0.047     0.046       0.045  1.7   0.016
Spring-3   0.11      0.11        0.090  3.8   0.010

Table 3: Test prediction losses for each model on each dataset, in two and three dimensions. The training was done with the same batch size, schedule, and number of epochs.

As described in the text (and visualized in the drive video), we can fit linear combinations of the true force components to each of the significant features of a message vector. This fit is summarized by table 1, and the fit itself is visualized in fig. 5 for various models on the 2D spring simulation.

Figure 5: The most significant message components of each model compared with a linear combination of the force components; in this example, the spring simulation in 2D with eight nodes used for training. These plots demonstrate that the GN’s messages have learned to be linear transformations of the vector components of the true force, in this case a spring-like force, after applying an inductive bias to the messages.

Appendix C Symbolic Regression Details

After training a model on each simulation, we convert the deep learning model into a symbolic expression by approximating sub-components of the model with symbolic regression over observed inputs and outputs. For our aforementioned GNN implementation, we record the outputs of ϕ^e and ϕ^v for various data points in the training set.

For models other than the bottleneck and Hamiltonian models (where we explicitly limit the number of features), we calculate the most significant output features of ϕ^e (we also refer to the output features as “message components”). For the L1 and standard models, this is done by sorting the message components by standard deviation; the most significant features are the ones with the largest standard deviations, and these are the features we study. For the KL model, we instead consider the features with the largest KL divergence, ½(μ² + σ² − log σ² − 1), averaged over the data. These features are the ones we consider to contain the information used by the GN, so they are the ones we fit symbolic expressions to.
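A one-function sketch of this selection step for the standard and L1 models (our illustration):

```python
import numpy as np

def most_significant_components(messages, dim):
    """Rank message components by standard deviation over the training data
    and return the indices of the `dim` most significant ones, which are the
    targets for symbolic regression.

    messages: (n_samples, message_dim) array of recorded messages."""
    order = np.argsort(messages.std(axis=0))[::-1]
    return order[:dim]
```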

As an example, here we fit the most significant feature, which we refer to as ϕ^e_1, over random examples of the training dataset. We do this for the particle simulations in section 4.1. The inputs to the actual neural network are the masses, charges, and Cartesian positions of the receiving and sending nodes, leaving us with many examples of these inputs paired with ϕ^e_1. We would like to fit a symbolic expression mapping the inputs to ϕ^e_1. To simplify things for this symbolic model, we convert the input position variables to a more interpretable format: Δx for the displacement in x, likewise for Δy (and Δz, if it is a 3D simulation), and r for the distance.

We then pass these examples (we take 5000 examples for each of our tests) to eureqa, and ask it to fit ϕ^e_1 as a function of the other variables by minimizing the mean absolute error (MAE). We allow it to use the same operators listed in section 2, as well as real constants, in its solutions. We score complexity by counting the number of occurrences of each operator, constant, and input variable. We weight ^ as three times the other operators, since it is a more complex operation. eureqa outputs the best equation at each complexity level. Example outputs are shown in table 4 for two of the simulations. We select a formula from this list by taking the one that maximizes the fractional drop in mean absolute error (MAE) over the increase in complexity from the next-best model. This is analogous to Occam’s razor: we jointly optimize for simplicity and accuracy of the model. The objective itself can be written as maximizing this score over the best model at each maximum complexity level, and is schematically illustrated in fig. 6. We find experimentally that this score produces the best-recovered solutions in a variety of tests on different generating equations.

Following this process, we fit a single analytic expression to model ϕ^e_1 as a function of the simplified input variables. We recover many analytical expressions that were used to generate the data; examples (with fitted constants learned from the data) include:

  • Spring, 2D, L1 (the recovered expression matches the expected spring force law).

  • An orbital force, 3D, Bottleneck (matches the expected orbital force law).

  • Discontinuous, 2D, L1 (matches the expected discontinuous force law).

Solutions extracted for the first of the two 2D simulations (MAE, complexity):
17.954713   22
18.400224   20
42.323236   18
69.447467   16
131.42547   14
160.31243   12
913.83751   8
1520.9493   6
1551.3437   5
1558.9756   3
1570.0905   1

Solutions extracted for the second of the two 2D simulations (MAE, complexity):
0.37839388  22
0.38        20
0.42        18
0.46575519  16
2.48        14
6.96        12
7.93        10
31.17       8
68.345174   6
85.743106   5
93.052677   3
96.708906   2
103.29053   1

Table 4: Results of using symbolic regression to fit equations to the most significant (see text) feature of ϕ^e, denoted ϕ^e_1, for two of the force laws, extracted from the bottleneck model. We expect ϕ^e_1 to be a linear combination of the true force components, which is approximately what we recover. The row with the largest fractional drop in mean absolute error in each table represents, according to our parametrization of Occam’s razor, the best model. This demonstrates a technique for learning an unknown “force law” with a constrained graph neural network.
Figure 6: A plot of the data for one of the simulations in table 4, indicating mean absolute error versus complexity in the top plot and the fractional drop in mean absolute error over the next-best model in the bottom plot. As indicated, we take the largest drop in log-loss over a single increase in complexity as the chosen model; it is our parametrization of Occam’s Razor.

Examples of failed reconstructions.

Note that reconstruction does not always succeed, especially for training strategies other than L1 or bottleneck, where the model cannot successfully find compact representations of the right dimensionality. We demonstrate some failed examples below:

  • Spring, 3D, KL (the extracted expression does not match the expected spring force law).

  • An orbital force, 3D, Standard (the extracted expression does not match the expected force law).

We do not attempt to make any general statements about when symbolic regression applied to the message components will fail or succeed in extracting the true law. We simply show that it is possible for a variety of physical systems, and argue that reconstruction is made more likely by the inclusion of a strong inductive bias in the network.

Discovering potentials using FlatHGN.

Lastly, we also show an example of a successful reconstruction of a pairwise Hamiltonian from data. We treat ℋ_pair just as we would ϕ^e, and fit it to data. The one difference here is that the learned potential can be offset by a constant function of the non-dynamical parameters (fixed properties like mass) and still produce the correct dynamics, since only the derivatives of ℋ are used. Thus, we cannot simply fit a linear transformation of the true ℋ_pair to the messages to verify that the model has learned our generating equation: we must rely on symbolic regression to extract the full functional form. We follow the same procedure as before, and successfully extract the potential for a charge simulation. The recovered expression has the expected form, proportional to q_1 q_2 / r up to a constant and an arbitrary function of the non-dynamical parameters, which shows that the neural network has learned the correct form of the Hamiltonian.

Appendix D Video Demonstration and Code

We include a video demonstration of the central ideas of our paper at https://github.com/MilesCranmer/symbolic_deep_learning. It shows the message components of a graph network converging to a linear combination of the true force components when L1 regularization is applied. Time in each clip of the video corresponds to training epoch. In this video, the top left corner of the fully revealed plot corresponds to a single test simulation that is 300 time steps long. Four particles of different masses are initialized with random positions and velocities, and evolved according to the potential of a spring with an equilibrium distance of 1, U = (r − 1)², where r is the distance between two particles. The evaluation trajectories are shown on the right, with the gray particles indicating the true locations. The 15 largest message components, in terms of standard deviation over a test set, are represented in a sorted list below the graph network in gray, where a darker color corresponds to a larger standard deviation. Since we apply L1 regularization to the messages, we expect this list to grow sparser over time, which it does. Of these messages, the two largest components are extracted, and each is fit to a separate linear combination of the true force components (bottom left). A better fit to the true force components — indicating that the messages represent the force — is indicated by dots (each dot is a single message) that lie closer to the line in the bottom-middle two scatter plots.

As can be seen in the video, as the messages grow increasingly sparse, they eventually converge to be almost exactly linear combinations of the true forces. Finally, once the loss has converged, we also apply symbolic regression to the largest message component. The video was created using the same training procedure as used in the rest of the paper. The dataset that the L1 model was trained on is the 4-node Spring-2 dataset. Finally, we include the full code required to generate the animated clips in the above figure. This code contains all of the models and simulators used in the paper, along with the default training parameters. This code can also be accessed in the drive.

Appendix E Cosmological Experiments

For the cosmological data graph network, we do a coarse hyperparameter tuning based on predictions of δ_i and select a GN with 500 hidden units and two hidden layers per node function and message function. We choose 100 message dimensions, as before. We keep the other hyperparameters the same as before, including the L1 regularization scale.

Remarkably, the vector space discovered by this graph network is one-dimensional. This is indicated by the fact that only one message component has a significant standard deviation, while the other 99 components have negligible standard deviations. This suggests that the prediction is a sum over some function of the center halo and each neighboring halo. Thus, we can rewrite our model as a sum over a function ϕ^e which takes the central halo and each neighboring halo, passed to ϕ^v, which predicts δ_i given the central halo properties and the summed messages.

Best-fit parameters.

We list the best-fit parameters for the discovered models in the paper in table 5. The functional forms were extracted from the GN by approximating both ϕ^e and ϕ^v over the training data with symbolic regression and then analytically composing the expressions. Although the symbolic regression fits constants itself, this accumulates error from the two levels of approximation (graph network to data, symbolic regression to graph network). Thus, we take the functional forms as given in table 5 and refit the parameters directly to the training data. This results in the parameters given, which are used to calculate the accuracy of the symbolic models.

Test                        Error
Old:  Constant (N/A)        0.421
Old:  Simple                0.121
New:  Best, without mass