AI Feynman 2.0: Pareto-optimal symbolic regression exploiting graph modularity

by Silviu-Marian Udrescu, et al.

We present an improved method for symbolic regression that seeks to fit data to formulas that are Pareto-optimal, in the sense of having the best accuracy for a given complexity. It improves on the previous state-of-the-art by typically being orders of magnitude more robust toward noise and bad data, and also by discovering many formulas that stumped previous methods. We develop a method for discovering generalized symmetries (arbitrary modularity in the computational graph of a formula) from gradient properties of a neural network fit. We use normalizing flows to generalize our symbolic regression method to probability distributions from which we only have samples, and employ statistical hypothesis testing to accelerate robust brute-force search.







1 Introduction

Figure 1: Our symbolic regression of data on how kinetic energy depends on mass, velocity and the speed of light discovers a Pareto frontier of four formulas that are each the most accurate given their complexity. Convex corners reveal particularly useful formulas, in this case Einstein's formula m c^2 (1/sqrt(1 - v^2/c^2) - 1) and the classical approximation m v^2 / 2.

A central challenge in science is symbolic regression: discovering a symbolic expression that provides a simple yet accurate fit to a given data set. More specifically, we are given a table of numbers, whose rows are of the form (x_1, ..., x_n, y), where y = f(x_1, ..., x_n), and our task is to discover the correct symbolic expression (composing mathematical functions from a user-provided set) for the unknown mystery function f, optionally including the complication of noise and outliers. Science aside, symbolic regression has the potential to replace some inscrutable black-box neural networks by simple yet accurate symbolic approximations, helping with the timely goal of making high-impact AI systems more interpretable and reliable russell2015research; amodei2016concrete; boden2017principles; krakovna2016increasing; russell2019human.

Symbolic regression is difficult because of the exponentially large combinatorial space of symbolic expressions. Traditionally, symbolic regression has relied on human intuition, leading to the discovery of some of the most famous formulas in science. More recently, there has been great progress toward fully automating the process crutchfield1987equation ; dzeroski1995discovering ; bradley2001reasoning ; langley2003robust ; schmidt2009distilling ; mcree2010symbolic ; searson2010gptips ; dubvcakova2011eureqa ; stijven2011separating ; schmidt2011automated ; hillar2012comment ; daniels2015automated ; langley2015heuristic ; arnaldo2015building ; brunton2016discovering ; guzdial2017game ; quade2018sparse ; koch2018mutual ; kong2018new ; udrescu2020ai ; liang2019phillips

, and open-source software now exists that can discover quite complex physics equations by combining neural networks with techniques inspired by physics and information theory udrescu2020ai. The goal of this paper is to further improve this state-of-the-art, by making three main contributions:

  1. We use learned neural-network gradients to discover and exploit modularity in the function’s computational graph.

  2. We use statistical hypothesis testing and recursive composition of description-length-based Pareto-frontiers to accelerate and robustify the symbolic regression.

  3. We use normalizing flows to enable regression of probability distributions from samples.

We describe our symbolic regression algorithm in Section 2 and test it with numerical experiments in Section 3.

2 Method

Our symbolic regression algorithm uses a divide-and-conquer approach as in udrescu2020ai . We directly solve a mystery in two base cases: if the mystery function is a low-order polynomial or if it is simple enough to be discovered by brute-force search. Otherwise, we recursively try the strategies that we will now describe for replacing it by one or more simpler mysteries, ideally with fewer input variables.

2.1 Leveraging graph modularity against the curse of dimensionality

Figure 2: All functions can be represented as tree graphs whose nodes represent a set of basic functions (middle panel). Using a neural network trained to fit a mystery function (left panel), our algorithm seeks a decomposition of this function into others with fewer input variables (right panel).

When we define and evaluate a mathematical function, we typically represent it as composed of some basis set of simpler functions. As illustrated in Figure 2 (middle panel), this representation can be specified as a graph whose nodes contain elements of this basis set. The most popular basis functions in the scientific literature tend to be functions of two variables (such as + or *), one variable (such as sin or exp) or no variables (constants such as 2 or pi). For many functions of scientific interest, this graph is modular in the sense that it can be partitioned in terms of functions with fewer input variables, as in Figure 2 (right panel).

A key strategy of our symbolic regression algorithm is to recursively discover such modularity, thereby reverse-engineering the computational graph of a mystery function, starting with no information about it other than an input-output data table. This is useful because there are exponentially many ways to combine basis functions into a module, making it extremely slow and difficult for brute-force or genetic algorithms to discover the correct function when the number of input variables is large. Our divide-and-conquer approach of first breaking the function into smaller modules with fewer input variables that can be solved separately thus greatly accelerates the solution. We implement this modularity discovery algorithm in two steps:

  1. Use the user-provided data table to train a neural network that accurately approximates the mystery function.

  2. Perform numerical experiments on this trained network to discover graph modularity.
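As a toy illustration of step 1, the sketch below fits a tiny two-layer numpy network to a hypothetical mystery table. This is only a stand-in for the paper's surrogate network: the architecture, target function and training loop are all illustrative choices, not the authors' implementation.

```python
# Illustrative surrogate fit: a tiny two-layer tanh MLP trained by
# full-batch gradient descent on a hypothetical mystery function.
import numpy as np

rng = np.random.default_rng(0)

def mystery(X):                       # hypothetical mystery function
    return X[:, 0] * X[:, 1] + X[:, 2]

X = rng.uniform(-1, 1, size=(1024, 3))
y = mystery(X)

H = 32                                # hidden width (arbitrary choice)
W1 = rng.normal(0, 0.5, (3, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)

def forward(X):
    A = np.tanh(X @ W1 + b1)          # hidden activations
    return A, (A @ W2 + b2).ravel()

lr = 0.05
_, pred = forward(X)
mse0 = np.mean((pred - y) ** 2)       # loss before training
for step in range(2000):
    A, pred = forward(X)
    err = (pred - y)[:, None] / len(X)    # mean-loss gradient wrt pred
    gW2 = A.T @ err; gb2 = err.sum(0)
    dA = err @ W2.T * (1 - A ** 2)        # backprop through tanh
    gW1 = X.T @ dA; gb1 = dA.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
_, pred = forward(X)
mse = np.mean((pred - y) ** 2)
print(mse0, mse)                      # training should reduce the error
```

Once such a surrogate fits the table well, step 2 probes it (e.g., its gradients) in place of the inaccessible true function.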

Figure 3: Examples of graph modularity that our algorithm can auto-discover. Lines denote real-valued variables and ovals denote functions, with larger ones being more complex.

Specifically, we test for the six types of graph modularity illustrated in Figure 3 and listed in Table 1, and choose between the discovered candidates as described in Section 2.2. Our method for discovering separability is described in udrescu2020ai. As we will see below, all our other types of graph modularity (compositionality, symmetry and generalized additivity) can be revealed by the gradient of our mystery function.

Name Property Action
Negativity Solve for
Positivity Solve for
Additive separability Solve for &
Multiplicative separability Solve for &
Simple symmetry Solve for
Compositionality Find with
Generalized symmetry Find satisfying
Generalized additivity Solve for , &
Zero-snap has numerical parameters p Replace by
Integer snap has numerical parameters p Round to integer
Rational snap has numerical parameters p Round to fraction
Reoptimize has numerical parameters p Reoptimize p to minimize inaccuracy
Table 1: Simplification strategies


Let us first consider the case of compositionality (Figure 3, top right), where f(x) = g(h(x)) and h is a scalar function simpler than f, in the sense of being expressible with a smaller graph as in Figure 3. By the chain rule, we have

grad f(x) = g'(h(x)) grad h(x),

so the unit-normalized gradients of f and h agree up to sign: grad f / |grad f| = +/- grad h / |grad h|. This means that if we can discover a function h whose gradient is proportional to that of f (we describe a process for this in Section 2.2), then we can simply replace the variables x in the original mystery data table by the single variable h(x) and recursively apply our AI Feynman algorithm to the new one-dimensional symbolic regression problem of discovering g.
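The gradient-alignment signature can be checked numerically. In the sketch below, the functions are hypothetical examples and the gradients come from central finite differences rather than a trained network; in the actual pipeline, the surrogate's gradients would play the role of grad f.

```python
# If f(x) = g(h(x)), the unit-normalized gradients of f and h agree up
# to sign, so their absolute cosine similarity should be near 1.
import numpy as np

def num_grad(func, x, eps=1e-5):
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x); d[i] = eps
        g[i] = (func(x + d) - func(x - d)) / (2 * eps)
    return g

def unit(v):
    return v / np.linalg.norm(v)

h = lambda x: x[0] * x[1] + x[2] ** 2        # hypothetical inner module
f = lambda x: np.sin(h(x)) + 3.0             # f = g(h) with g = sin + 3

rng = np.random.default_rng(1)
alignments = []
for _ in range(100):
    x = rng.uniform(0.5, 1.5, 3)
    a = abs(unit(num_grad(f, x)) @ unit(num_grad(h, x)))
    alignments.append(a)
worst = min(alignments)
print(worst)   # should be very close to 1 at every sampled point
```

A candidate h lacking this structure would fail the check at most sampled points, which is what makes the test usable for screening.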

Generalized symmetry

Let us now turn to generalized symmetry (Figure 3, bottom left), where k of the arguments enter only via some scalar function h of them. Specifically, we say that f has generalized symmetry if the n components of the vector x can be split into groups of k and n-k components (which we denote by the vectors x' and x'') such that f(x) = g(h(x'), x'') for some function g. By the chain rule, we have

grad_x' f(x) = g_1(h(x'), x'') grad h(x'),

where g_1 denotes the derivative of g with respect to its first argument. This means that the unit-normalized gradient of f with respect to x' is independent of x'', which it would not be for a generic function f. x''-independence of the normalized gradients thus provides a smoking-gun signature of generalized symmetry. Whereas our compositionality discovery above requires discovering an explicit function h, we can discover generalized symmetry without knowing h, thus only performing the time-consuming task of searching for an h satisfying equation (2) after determining that a solution exists. The Supplementary Material details how we numerically test for x''-independence of the normalized gradients.
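A minimal numerical version of this test holds x' fixed, sweeps x'', and measures how much the normalized sub-gradient moves. Both test functions below are hypothetical examples chosen for illustration.

```python
# For f(x) = g(h(x'), x''), the unit-normalized gradient with respect
# to x' = (x0, x1) should not vary as x'' = x2 changes.
import numpy as np

def sub_grad(func, xp, x2, eps=1e-5):
    # gradient with respect to the (x0, x1) block only
    g = np.zeros(2)
    for i in range(2):
        d = np.zeros(2); d[i] = eps
        g[i] = (func(*(xp + d), x2) - func(*(xp - d), x2)) / (2 * eps)
    return g / np.linalg.norm(g)

def spread(func):
    xp = np.array([0.7, 1.3])                  # fixed x'
    vs = np.array([sub_grad(func, xp, x2)
                   for x2 in np.linspace(0.1, 2.0, 50)])
    vs *= np.sign(vs[:, :1])                   # remove the sign ambiguity
    return vs.std(axis=0).max()                # variation across x''

modular = lambda x0, x1, x2: (x0 * x1) * np.exp(x2)  # f = g(x0*x1, x2)
generic = lambda x0, x1, x2: x0 * x2 + np.sin(x1)    # no such structure

s_mod = spread(modular)
s_gen = spread(generic)
print(s_mod, s_gen)   # tiny for the modular case, large for the generic
```

The modular function's spread is at the level of finite-difference noise, while the generic function's normalized sub-gradient rotates visibly as x2 changes.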

Generalized additivity

If f is a function of two variables, then we also test for generalized additivity (Figure 3, bottom right), where f(x_1, x_2) = F(g(x_1) + h(x_2)). If we define the function s(x_1, x_2) as the ratio of the two partial derivatives of f, then the chain rule gives s = g'(x_1)/h'(x_2) if f satisfies the generalized additivity property. In other words, we simply need to test if s is of the multiplicatively separable form s(x_1, x_2) = a(x_1) b(x_2), and we do this using a variant of the separability test described in udrescu2020ai. The Supplementary Material details how we perform this separability test numerically.

2.2 Robustness through recursive Pareto-optimality

As illustrated in Figure 1, the goal of our symbolic regression of a data set is to approximate by functions that are not only accurate, but also simple, in the spirit of Occam’s razor. As in schmidt2009distilling , we seek functions that are Pareto-optimal in the sense of there being no other function that is both simpler and more accurate. We will adopt an information-theoretical approach and use bits of information to measure lack of both accuracy and simplicity.

For accuracy, we wish the vector of prediction errors to be small. We quantify this not by the mean-squared error or max-error as in schmidt2009distilling ; udrescu2020ai , but by the MEDL, the mean error-description-length defined in Table 2. As argued in wu2019toward and illustrated in Figure 4, this improves robustness to outliers. We analogously quantify complexity by the description length defined as in wu2019toward , summarized in Table 2.

The description length defined in Table 2 can be viewed as a crude but computationally convenient approximation of the number of bits needed to describe each object, made differentiable where possible, relative to a fixed precision floor. For function complexity, both input variables and mathematical functions (e.g., cos and +) count toward the symbol counts. For example, the complexity of the classical kinetic energy formula m v^2 / 2 is set by the number of distinct basis symbols it contains and the number of times they are used.

Object Symbol Description length
Natural number
Rational number
Real number
Parameter vector p
Parametrized function ; basis functions appear k times
Table 2: Complexity definitions
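The sketch below shows one plausible reading of Table 2's definitions, following our understanding of wu2019toward: treat the exact constants and formulas (the precision floor, the per-type costs) as assumptions rather than a transcription of the authors' code.

```python
# Assumed description-length helpers: integers cost log2(1 + |n|) bits,
# a rational p/q costs log2((1 + |p|) * q), a real parameter is charged
# relative to a precision floor, and a formula with k symbols drawn
# from an n-symbol basis costs k * log2(n) bits.
import math

EPSILON = 2 ** -30          # assumed precision floor

def dl_natural(n):
    return math.log2(1 + abs(n))

def dl_rational(p, q):
    return math.log2((1 + abs(p)) * q)

def dl_real(x, eps=EPSILON):
    return 0.5 * math.log2(1 + (x / eps) ** 2)

def dl_formula(k, n):
    return k * math.log2(n)   # richer bases make each symbol pricier

print(dl_natural(4), dl_rational(1, 2), dl_formula(8, 4))
```

Under this reading, a formula using 8 symbols from a 4-symbol basis costs 16 bits, and snapping a real parameter to a nearby small rational can sharply reduce its cost.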

We wish to make the symbolic regression implementation of udrescu2020ai more robust; it sometimes fails to discover the correct expression because of noise in the data or inaccuracies introduced by the neural network fitting. The neural network accuracy may vary strongly with x, becoming quite poor in domains with little training data or when the network is forced to extrapolate rather than interpolate, and we desire a regression method robust to such outliers. We expect our insistence on Pareto-optimal functions in the information plane of Figure 1 to increase robustness, both because the error description length is robust (Figure 4) and because noise and systematic errors are unlikely to be predictable by a simple mathematical formula with small description length. More broadly, minimization of total exact description length (which our measure crudely approximates) provably avoids the overfitting problem that plagues many alternative machine-learning strategies rissanen1978modeling; hutter2000theory; grunwald2005advances.

Figure 4: When fitting a function (the right panel shows an example) to data with outliers, minimizing mean-squared-error (MSE) biases the curve toward the outliers, whereas minimizing mean error description length (MEDL) ignores the outliers and recovers the correct answer. The left panel compares the MSE and MEDL loss functions.
Speedup by recursive Pareto frontier composition

When recursively symbolically regressing various modules (see Figure 2), we end up with a Pareto frontier of candidate functions for each one. If each module contributes several functions on its frontier, then naively combining them all would produce a combinatorially large number of candidates for the original function. We speed up our algorithm by Pareto-pruning after each merge step: whenever two modules are combined (via composition or multiplication, say), the resulting functions are pruned by removing all functions that are Pareto-dominated by another function that is both simpler and more accurate. Pruning to the Pareto frontier significantly reduces the number of models that need to be evaluated, since in typical scenarios, the number of Pareto-optimal points grows only logarithmically with the total number of points.
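A minimal version of the pruning step can be written as a single sweep; this is a generic sketch of Pareto dominance on (complexity, loss) pairs, not the paper's implementation.

```python
# Keep only candidates not dominated by another that is at least as
# simple and strictly more accurate: sort by complexity, then sweep,
# retaining each candidate only if it improves on the best loss so far.
def pareto_prune(candidates):
    best = float("inf")
    frontier = []
    for comp, loss in sorted(candidates):
        if loss < best:            # strictly more accurate than all simpler ones
            frontier.append((comp, loss))
            best = loss
    return frontier

models = [(1, 5.0), (2, 4.0), (2, 6.0), (3, 4.5), (4, 1.0)]
frontier = pareto_prune(models)
print(frontier)   # -> [(1, 5.0), (2, 4.0), (4, 1.0)]
```

Applying this after every merge keeps the candidate count manageable, since dominated combinations are discarded before they can multiply further.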

Robust speedup of brute-force graph search with hypothesis testing

Our recursive reduction of regression mysteries into simpler ones terminates at the base case when the mystery function has only one variable and cannot be further modularized. As in udrescu2020ai, we subject these (and also all multivariate modules) to two solution strategies, polynomial fitting up to some maximum degree and brute-force search, and then add all candidate functions to the Pareto plane and prune as above. The brute-force search would, if run forever, try all symbolic expressions, by looping over ever-more-complex graphs (the middle panel of Figure 2 shows an example) and over function options for each node.

Our brute-force computation of the Pareto frontier simply tries all candidate functions in order of increasing complexity and keeps only those with lower mean error-description-length (MEDL) than the previous record holder. When instead fitting normalized gradient vectors as in Section 2.1, we modify the per-point error definition to handle the sign ambiguity. The bad news is that computing the MEDL exactly is slow, requiring evaluation of the candidate function for all data points. The good news is that this is usually unnecessary, since for the vast majority of all candidate functions, it becomes obvious that they provide a poor fit after trying merely a handful of data points. We therefore accelerate the search via the following procedure. Before starting the loop over candidate functions, we sort the data points in random order, so that the per-point error-description-lengths can be interpreted as random samples from a probability distribution whose mean is the sought-for MEDL and whose standard deviation can be estimated from the samples. Let the corresponding mean and standard deviation computed for the previous best-fit function added to the Pareto frontier serve as the baseline. We make the simplifying approximation that all errors are uncorrelated, so that the loss estimate from the first m data points has a standard deviation shrinking as 1/sqrt(m). We now test our candidate function on one data point at a time and reject it as soon as its running mean error exceeds the baseline by the required number of standard deviations.



Here the rejection threshold is a hyperparameter that we can interpret as the "number of sigmas" we require to rule out a candidate function as viable when its average error exceeds the previous record holder. We find that a moderate threshold usually works well, generically requiring no more than a handful of evaluations per candidate function asymptotically. We can further increase robustness by increasing this threshold at the price of longer runtime.
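The sequential rejection rule can be sketched as follows; the threshold value, variable names and toy error streams are illustrative assumptions, not the authors' code.

```python
# Evaluate a candidate one data point at a time and reject it once its
# running mean error exceeds the record holder's mean by nu standard
# errors of the record's per-point error distribution.
import math
import random

def sequential_reject(errors, record_mean, record_std, nu=5.0):
    """Return the number of points consumed before rejection,
    or None if the candidate survives the whole pass."""
    total = 0.0
    for m, e in enumerate(errors, start=1):
        total += e
        mean = total / m                     # running mean after m points
        if mean - record_mean > nu * record_std / math.sqrt(m):
            return m                         # confidently worse: reject
    return None

random.seed(0)
good = [random.gauss(1.0, 0.1) for _ in range(1000)]   # matches the record
bad  = [random.gauss(5.0, 0.1) for _ in range(1000)]   # clearly worse
used_bad = sequential_reject(bad, record_mean=1.0, record_std=0.1)
used_good = sequential_reject(good, record_mean=1.0, record_std=0.1)
print(used_bad, used_good)
```

The clearly worse candidate is rejected almost immediately, while a candidate statistically indistinguishable from the record survives the full pass and gets an exact evaluation.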

Speedup by greedy search of simplification options

We do not a priori know which of the modular decompositions from Figure 3 are most promising, and recursively trying all combinations of them would involve exponentially many options. We therefore accelerate our algorithm with a greedy strategy where at each step we compare the candidate decompositions in a unified way and try only the most accurate one; our runtime thus grows roughly linearly with the number of input variables. The function f stays constant along curves of constant h for generalized symmetry, simple symmetry (where h is of the form x_i + x_j, x_i - x_j, x_i * x_j or x_i / x_j) and generalized additivity. We thus test the accuracy of all such h-candidates by starting at a data point and computing the prediction error at another point with the same value of h. For additive and multiplicative separability, we follow udrescu2020ai by examining a rectangle in parameter space and predicting f at the fourth corner from the other three, defining the error as the mismatch. The supplementary material details how our test points are chosen.

After this greedy recursive process has terminated, we further improve the Pareto frontier in two ways. We first add models where rational numbers are replaced by reals and optimized by gradient descent to fit the data. We then add models with zero-snap, integer-snap and rational-snap from Table 1 applied to all real-valued parameters as described in wu2019toward , pruning all Pareto-dominated models after each step. For example, if there are 3 real-valued parameters, integer-snap generates 3 new models where the 1, 2 and 3 parameters closest to integers get rounded, respectively.

2.3 Leveraging normalizing flows to symbolically regress probability distributions

An important but more difficult symbolic regression problem is when the unknown function f is a probability distribution from which we have random samples rather than direct function evaluations. We tackle this by preceding the regression with a step that estimates f from the samples. For this step, we use the popular normalizing flow technique rezende15; dinh2017; kingma2018; durkan2019; kobyzev2020, training an invertible neural network mapping g such that z = g(x) has a multivariate normal distribution, as illustrated in Figure 5. We then obtain our estimator as the normal density evaluated at g(x) multiplied by |det J|, where J is the Jacobian of g.

We find rational-quadratic neural spline flows (RQ-NSF) suitable for relatively low-dimensional applications due to their enhanced expressivity. Specifically, we used three steps of the RQ-NSF with RQ-NSF (C) coupling layers as described in durkan2019, parametrized by three 16-neuron softplus layers and trained with the Adam optimizer. The learning rate was halved every time the test loss failed to improve for a fixed number of epochs.

Figure 5: A normalizing flow maps samples from a probability distribution (right) into a normal distribution (left), enabling an estimate (middle) of the distribution, here illustrated for a hydrogen orbital from Table 5.

3 Results

We now turn to quantifying the performance of our method with numerical experiments, comparing it with that of udrescu2020ai, which recently exceeded the previous state-of-the-art performance of schmidt2009distilling. To quantify robustness to noise, we add Gaussian noise of increasing standard deviation to y and determine the largest noise level at which the method still discovers the correct mystery function f. As seen in Table 3, our method solves 73 of the 100 baseline problems from the Feynman Symbolic Regression Database udrescu2020ai even with substantial noise added, and is typically 1-3 orders of magnitude more robust than that of udrescu2020ai. Crudely speaking, we found that adding progressively more noise shifted the most accurate formula straight upward in the Pareto plane (Figure 1) until it no longer provided any accuracy gains compared with simpler approximations.

Table 3: Robustness to noise

To quantify the ability of our method to discover more complex equations, we reran it on all 17 mysteries that udrescu2020ai tackled and failed to solve. We also tested a dozen new mysteries exhibiting various forms of graph modularity (see Table 4) that were all chosen before any of them were tested. Allowing at most two hours of run-time, the method of udrescu2020ai solved equations 5, 12,…,15, whereas our new method solved them all, as well as four of the outstanding mysteries from udrescu2020ai (rows 1-4). For these first four, our method got the numerical parameters in the right ballpark with rational approximations, then discovered their exact values through gradient descent.

Equation Symmetries
1 TC
2 TS
4 P
5  (Parallel resistors) PGSM
6  (RLC circuit ) MG
7  (RLC circuit) MG
8  (Wheatstone bridge) SGMA
9  (Velocity addition) AG
10  (Velocity addition) GA
11  (-norm) AC
12 GA
13 A
14 A
15 A
16  (Incircle) GMAC
Table 4: Test equations exhibiting translational symmetry (T), scaling symmetry (S), product symmetry (P), generalized symmetry (G), multiplicative separability (M), compositionality (C) and generalized additivity (A).

To quantify the ability of our method to discover probability distributions, we tested it on samples from the ten distributions in Table 5. As seen in the table, 80% were solved, with the required number of samples varying by orders of magnitude across distributions. The flows trained in about 20 minutes on one CPU, scaling roughly linearly with sample size and number of network weights. The Supplementary Material details failure modes.

Distribution Name Probability distribution
Laplace distribution
Beta distribution (, )
Beta distribution (, )
Harmonic oscillator (, )
Sinc diffraction pattern
2D normal distribution (correlated)
2D harmonic oscillator (, , )
Hydrogen orbital (, , )
Hydrogen orbital (, , ) -
Hydrogen orbital (, , ) -
Table 5: Probability distributions and number of samples required to discover them

4 Conclusions

We have presented a symbolic regression method that exploits neural networks, graph modularity, hypothesis testing and normalizing flows. It improves state-of-the-art performance both by being more robust towards noise and by solving harder problems, including symbolic density estimation. These core ideas can enable further improvements. For example, gradients can reveal more types of graph modularity than the Figure 3

examples that we exploited; additional simplification strategies can be included in the Pareto-optimal recursion; and flow-based regression can be used for regularized density estimation from sparse high-dimensional data. Larger and more challenging collections of science-based equations are needed to benchmark and inspire improved algorithms.

Pareto-optimal symbolic regression has the power to not only discover exact formulas, but also approximate ones that are useful for being both accurate and simple. The mainstream view is that all known science formulas are such approximations. We live in a golden age of research with ever-larger datasets produced by both experiments and numerical computations, and we look forward to a future when symbolic regression is as ubiquitous as linear regression is today, helping us better understand the relations hidden in these datasets.

The authors wish to thank Philip Tegmark for helpful comments, and the Center for Brains, Minds, and Machines (CBMM) for hospitality.

Funding: This work was supported by The Casey and Family Foundation, the Ethics and Governance of AI Fund, the Foundational Questions Institute, the Rothberg Family Fund for Cognitive Science and the Templeton World Charity Foundation, Inc.

Competing interests: The authors declare that they have no competing interests.

Data and materials availability: Data and code will be publicly released when this paper is accepted for publication.

Broader Impact

Who may benefit from this research

Our research presumably has quite broad impact, since discovery of mathematical patterns in data is a central problem across the natural and social sciences. Given the ubiquity of linear regression in research, one might expect that there will be significant benefits to a broad range of researchers from more general symbolic regression too, once freely available algorithms get sufficiently good.

Who may be put at disadvantage from this research

Although it is possible that some numerical modelers could get their jobs automated away by symbolic regression, we suspect that the main effect of our method, and future tools building on it, will instead be that these people will simply discover better models than today.

Risk of bias, failure and other negative outcomes

Pareto-optimal symbolic regression can be viewed as an extreme form of lossy data compression that uncovers the simplest possible model for any given accuracy. To the extent that overfitting can exacerbate bias, such model compression is expected to help. Moreover, since our method produces closed-form mathematical formulas that have excellent interpretability compared to black-box neural networks, they make it easier for humans to interpret the computation and pass judgement on whether it embodies unacceptable bias. This interpretability also reduces failure risk.

Another risk is automation bias, whereby people overly trust a formula from symbolic regression when they extrapolate it into an untested domain. This could be exacerbated if symbolic regression promotes scientific laziness and enfeeblement, where researchers fit phenomenological models instead of doing the work of building models based on first principles. Symbolic regression should inform but not replace traditional scientific discovery.

Although the choice of basis functions biases the discoverable function class, our method is agnostic to basis functions as long as they are mostly differentiable.

The potentially greatest risk associated with this work does not stem from it failing but from it succeeding: accelerated progress in symbolic regression, modularity discovery and its parent discipline program synthesis could hasten the arrival of artificial general intelligence, which some authors have argued that humanity still lacks the tools to manage safely russell2019human

. On the other hand, our work may help accelerate research on intelligible intelligence more broadly, and powerful future artificial intelligence is probably safer if we understand aspects of how it works than if it is an inscrutable black box.

Supplementary material

Below we provide additional technical details about how we implement our method and numerical experiments.

Appendix A Testing for generalized symmetry

We showed that generalized symmetry can be revealed by the normalized gradient of f with respect to x' being independent of x''. We will now describe how we test for such x''-independence numerically. Given a point x' from our data set, we compute a set of normalized gradients v_i, where the index i runs over a sample of other data points' x''-values, and quantify the variation between them by one minus the largest eigenvalue of the matrix V formed by averaging the outer products of the v_i.

We can intuitively interpret the corresponding top eigenvector as maximally aligned with the vectors v_i up to a sign. Our variation measure thus ranges from 0 when all the v_i are identical (up to sign) to its maximum when all eigenvalues of V are equal (V has unit trace, since the v_i are unit vectors). As illustrated in Figure 6, we compute this measure for each subset of up to a fixed number of input variables, and select the subset with the smallest median as the most promising generalized symmetry candidate. In our numerical experiments, we cap this subset size to save time, since we do not wish to consider all subsets when the number of variables is large.

Figure 6: Distribution of the variation measure for the function from Figure 2, revealing that evidence for the correct generalized symmetry (shaded distribution) is stronger than for the alternative variable pairings (blue and red curves). The curves are shown slightly smoothed for clarity.
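The eigenvalue-based variation measure is a few lines of linear algebra; the sketch below uses synthetic unit vectors rather than gradients from a trained network.

```python
# Variation measure: build V = <v v^T> from unit-normalized vectors and
# return 1 minus its largest eigenvalue. Perfectly aligned vectors (up
# to sign) give 0; isotropically scattered ones approach 1 - 1/k.
import numpy as np

def variation(vectors):
    vs = np.array([v / np.linalg.norm(v) for v in vectors])
    V = np.einsum("ni,nj->ij", vs, vs) / len(vs)   # mean outer product
    return 1.0 - np.linalg.eigvalsh(V)[-1]         # eigvalsh sorts ascending

aligned = [[1.0, 2.0], [-1.0, -2.0], [2.0, 4.0]]   # one line, mixed signs
rng = np.random.default_rng(2)
scattered = rng.normal(size=(500, 2))

v_aligned = variation(aligned)
v_scattered = variation(scattered)
print(v_aligned, v_scattered)   # near 0 vs near 1 - 1/2
```

Note that the sign ambiguity of the normalized gradients is handled automatically, since each vector enters V only through its outer product.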

Appendix B Testing for generalized additivity

We showed that generalized additivity holds when the function s from Equation (3) is multiplicatively separable. We will now describe how we test for such separability numerically. s being multiplicatively separable is equivalent to log |s| being additively separable, so we test the function log |s| for additive separability using a normalized score based on its mixed second derivative.

It is easy to see that this score vanishes if log |s| is a sum of separate functions of the two variables, and is generically nonzero otherwise. If the median value of the score over all points in the dataset is sufficiently small, we take this as evidence for generalized additivity in the dataset and proceed as below. It is important to use smooth (not, e.g., ReLU) activation functions in the neural network for this derivative-based test to be useful.

If this property holds, then we recursively apply our algorithm to the two new one-dimensional symbolic regression problems of discovering the two separable factors. If this succeeds and we are able to discover the functions g and h by symbolically integrating our solutions for g' and h', then we have reduced the original problem to the same state as when we found compositionality above, with the inner function g(x_1) + h(x_2) now known. Just as in that case, we simply replace the variables x in the original mystery data table by this single inner variable and recursively apply our AI Feynman algorithm to the new one-dimensional symbolic regression problem of discovering the outer function.

If we have determined that generalized additivity holds but the aforementioned method for discovering g and h fails, we make a second attempt by training a neural network of the matching modular form to fit the data. If this succeeds, we then recursively apply our AI Feynman algorithm to the three new one-dimensional symbolic regression problems of discovering the outer function and the two inner functions.

Appendix C Further details on success and failure modes

Our paper reported which symbolic regression problems our method succeeded and failed on. Here we add specifics on how these successes and failures occurred.

Success definition

Given a data set, we use 90% of the data to compute a Pareto-optimal set of candidate functions, then rank them based on their MEDL accuracy on the held-back 10% of the data. We count our method as successful only if the top-ranked function matches the true f exactly or, if the definition of f involves irrational numerical parameters, if these parameters are recovered to high relative accuracy.

We considered an equation solved even if the top solution was not in the exact form presented in our tables, but mathematically equivalent. For example, our method predicted a form of Equation (12) in Table 4 that is mathematically equivalent to the tabulated one within the domain of our provided data set.

For the problem of density estimation from samples, our goal was to obtain the correct normalized probability distributions. The candidate functions on the Pareto frontier were therefore discarded unless they were non-negative and normalizable. The surviving candidates were then normalized to integrate to unity by symbolic or numerical integration to obtain the appropriate normalization constant, and quality-ranked by the surprisal loss function evaluated on the held-back test data.

Success examples

Tables 1 and 2 below show the highest noise level allowing each of the 100 equations from the Feynman Database for Symbolic Regression to be solved, both in the original paper analyzing it and in the present paper.

For many of the solved equations, modularity discovery had to be used multiple times in order for the correct equation to be discovered, reflecting the power of the recursive algorithm. For example, for the quadruple velocity addition equation in Table 4, generalized symmetry was exploited twice. First, the code discovered that the first two velocities only enter in the relativistic-addition combination, and these two variables were replaced by a new single variable. The same method then discovered that this new variable and the third velocity only enter in that same combination, so the initial three velocities were replaced by a single variable. Now the remaining equation had only two variables left, and was solved by brute force. In principle, this recursive method can be used to discover relativistic addition of an arbitrary number of velocities, by reducing the number of variables by one at each step.
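The recursion described above is easy to mimic directly; the sketch below applies the pairwise relativistic-addition module repeatedly, in units where c = 1. It illustrates the variable-elimination pattern, not the discovery code itself.

```python
# Recursive reduction for relativistic velocity addition: repeatedly
# replace two velocities u, v by h(u, v) = (u + v) / (1 + u v / c^2)
# until a single variable remains (units with c = 1 assumed).
def add_two(u, v, c=1.0):
    return (u + v) / (1.0 + u * v / c ** 2)

def add_many(velocities):
    w = velocities[0]
    for v in velocities[1:]:     # one modularity-discovery step per merge
        w = add_two(w, v)
    return w

w = add_many([0.5, 0.5, 0.5, 0.5])
print(w)                         # stays below c = 1
```

Each merge is exactly the variable replacement the generalized-symmetry step performs: two inputs collapse into one, and the combined result never exceeds the speed of light.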

Failure examples

As mentioned in the main text, numerous equations remained unsolved, motivating further work. In some cases, the main reason for this failure was that the form of the equation did not allow our method to break it into small enough pieces. For one such equation, our algorithm is able to discover a multiplicative separability that splits off the dependence on two of the variables. However, the remaining term is too complicated to be solved in a reasonable amount of time by the brute force code, and none of the graph modularity methods apply, because they only help for functions of more than one variable.

For other equations, our method fails but not irreparably. For example, for one function, our code is able to discover that two of the variables can be separated from the rest of the equation. However, given that we allow the brute force code to run for only a minute at each iteration, the remaining expression is not discovered, mainly because the required function was not included in our basis set, so the brute force would have to build it out of more primitive operations. By allowing the code to run for longer and experimenting with additional basis functions, it is likely that the code would solve this and several other mysteries that we reported as failures.

Fitting a normalizing flow for density estimation presents a ‘real-world’ test of the symbolic regression code, which needs to cope with flow errors. We demonstrated the robustness of the code to more challenging errors (i.e. not i.i.d. Gaussian); some of the failure modes are investigated and expanded on below.

It is worth noting that our definition of complexity is dependent on the chosen set of operations and does not always match our intuition. For example, in fitting the probability distribution of electron positions in a hydrogen orbital, solutions with one radial dependence are preferred over another. This is because, up to additive and multiplicative prefactors, the two formulas differ only slightly over our parameter range, while the chosen operation set makes one of them encodable with fewer symbols in reverse Polish notation than the other. In the presence of the imprecisions introduced by the normalizing flow, we were unable to perform the density estimation at a level at which the accuracy gain from the correct form was preferred over the simpler alternative.

Furthermore, more interpretable approximations (e.g., Taylor expansions) are not always favored by our definition of complexity. For example, in Figure 1, the unfamiliar solution intermediate between the more familiar classical and relativistic kinetic energy formulas can be understood as a fourth-order approximation in velocity of the exact formula. The Taylor expansions themselves are not preferred for reasons of complexity.
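The fourth-order claim can be checked numerically: the residual of the standard expansion m v^2 / 2 + 3 m v^4 / (8 c^2) against the exact relativistic kinetic energy should shrink like v^6, so halving v should shrink it by roughly a factor of 64. This is a textbook-physics check, not a reconstruction of the specific Pareto-frontier formula.

```python
# Check that the exact relativistic kinetic energy matches its
# fourth-order Taylor expansion up to an O(v^6) residual.
import math

def exact_ke(m, v, c=1.0):
    return m * c ** 2 * (1.0 / math.sqrt(1.0 - (v / c) ** 2) - 1.0)

def fourth_order(m, v, c=1.0):
    return 0.5 * m * v ** 2 + 3.0 * m * v ** 4 / (8.0 * c ** 2)

m = 1.0
resid_coarse = abs(exact_ke(m, 0.2) - fourth_order(m, 0.2))
resid_fine = abs(exact_ke(m, 0.1) - fourth_order(m, 0.1))
ratio = resid_coarse / resid_fine
print(ratio)   # roughly 2**6, confirming an O(v^6) residual
```

This is why such intermediate formulas earn a spot on the frontier: they buy several extra orders of accuracy in v for a modest complexity increase over m v^2 / 2.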

Feynman eq. Equation Old Noise tolerance New Noise tolerance