1 Introduction
A central challenge in science is symbolic regression: discovering a symbolic expression that provides a simple yet accurate fit to a given data set. More specifically, we are given a table of numbers whose rows are of the form $(x_1,\dots,x_n, y)$ where $y = f(x_1,\dots,x_n)$, and our task is to discover the correct symbolic expression (composing mathematical functions from a user-provided set) for the unknown mystery function $f$
, optionally including the complication of noise and outliers. Science aside, symbolic regression has the potential to replace some inscrutable black-box neural networks by simple yet accurate symbolic approximations, helping with the timely goal of making high-impact AI systems more interpretable and reliable
russell2015research ; amodei2016concrete ; boden2017principles ; krakovna2016increasing ; russell2019human . Symbolic regression is difficult because of the exponentially large combinatorial space of symbolic expressions. Traditionally, symbolic regression has relied on human intuition, leading to the discovery of some of the most famous formulas in science. More recently, there has been great progress toward fully automating the process crutchfield1987equation ; dzeroski1995discovering ; bradley2001reasoning ; langley2003robust ; schmidt2009distilling ; mcree2010symbolic ; searson2010gptips ; dubvcakova2011eureqa ; stijven2011separating ; schmidt2011automated ; hillar2012comment ; daniels2015automated ; langley2015heuristic ; arnaldo2015building ; brunton2016discovering ; guzdial2017game ; quade2018sparse ; koch2018mutual ; kong2018new ; udrescu2020ai ; liang2019phillips
, and open-source software now exists that can discover quite complex physics equations by combining neural networks with techniques inspired by physics and information theory
udrescu2020ai . The goal of this paper is to further improve this state-of-the-art by making three main contributions:
We use learned neural-network gradients to discover and exploit modularity in the function’s computational graph.

We use statistical hypothesis testing and recursive composition of description-length-based Pareto frontiers to accelerate and robustify the symbolic regression.

We use normalizing flows to enable regression of probability distributions from samples.
We describe our symbolic regression algorithm in Section 2 and test it with numerical experiments in Section 3.
2 Method
Our symbolic regression algorithm uses a divide-and-conquer approach as in udrescu2020ai . We directly solve a mystery in two base cases: if the mystery function is a low-order polynomial or if it is simple enough to be discovered by brute-force search. Otherwise, we recursively try the strategies that we will now describe for replacing it by one or more simpler mysteries, ideally with fewer input variables.
2.1 Leveraging graph modularity against the curse of dimensionality
When we define and evaluate a mathematical function, we typically represent it as composed of some basis set of simpler functions. As illustrated in Figure 2 (middle panel), this representation can be specified as a graph whose nodes contain elements of this basis set. The most popular basis functions in the scientific literature tend to be functions of two variables (such as $+$ or $\times$), one variable (such as $\sin$ or $\log$) or no variables (constants such as $2$ or $\pi$). For many functions of scientific interest, this graph is modular in the sense that it can be partitioned in terms of functions with fewer input variables, as in Figure 2 (right panel).
A key strategy of our symbolic regression algorithm is to recursively discover such modularity, thereby reverse-engineering the computational graph of a mystery function, starting with no information about it other than an input-output data table. This is useful because there are exponentially many ways to combine $n$ basis functions into a module, making it extremely slow and difficult for brute-force or genetic algorithms to discover the correct function when $n$ is large. Our divide-and-conquer approach of first breaking the function into smaller modules with smaller $n$ that can be solved separately thus greatly accelerates the solution. We implement this modularity discovery algorithm in two steps:
Use the user-provided data table to train a neural network that accurately approximates the mystery function $f$.

Perform numerical experiments on this neural-network approximation of $f$ to discover graph modularity.
Specifically, we test for the six types of graph modularity illustrated in Figure 3 and listed in Table 1, and choose between the discovered candidates as described in Section 2.2. Our method for discovering separability is described in udrescu2020ai . As we will see below, all our other types of graph modularity (compositionality, symmetry and generalized additivity) can be revealed by $\nabla f$, the gradient of our mystery function $f$.
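For concreteness, step 1 of this procedure can be sketched as follows. The toy mystery function, the random-feature network (only the output layer is fit, by least squares, a simplification of the full network training used in the paper) and all sizes are illustrative assumptions:

```python
import numpy as np

# Step 1 sketched with a stand-in for full network training: fit a
# one-hidden-layer tanh network to the tabulated mystery data, training
# only the output layer by least squares (a random-feature simplification).

rng = np.random.default_rng(0)

def mystery(X):
    """Toy mystery function y = x1 * x2 + sin(x3), assumed for illustration."""
    return X[:, 0] * X[:, 1] + np.sin(X[:, 2])

X = rng.uniform(-1, 1, size=(1024, 3))   # the user-provided data table
y = mystery(X)

H = 128                                  # hidden tanh units
W1 = rng.normal(0, 1.0, (3, H))
b1 = rng.uniform(-1, 1, H)
A = np.tanh(X @ W1 + b1)                 # random hidden features

# Least-squares fit of the output weights (with a bias column).
design = np.hstack([A, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(design, y, rcond=None)

def f_nn(X):
    """The fitted surrogate for the mystery function."""
    return np.hstack([np.tanh(X @ W1 + b1), np.ones((len(X), 1))]) @ w

mse = np.mean((f_nn(X) - y) ** 2)
```

In the full algorithm, a surrogate of this kind is what all the gradient-based modularity tests below are run on.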
Name  Property  Action
Negativity  Solve for
Positivity  Solve for
Additive separability  Solve for &
Multiplicative separability  Solve for &
Simple symmetry  Solve for
Compositionality  Find with
Generalized symmetry  Find satisfying ,
Generalized additivity  Solve for , &
Zero snap  has numerical parameters p  Replace by
Integer snap  has numerical parameters p  Round to integer
Rational snap  has numerical parameters p  Round to fraction
Reoptimize  has numerical parameters p  Reoptimize p to minimize inaccuracy
Compositionality
Let us first consider the case of compositionality (Figure 3, top right), where $f(\mathbf{x}) = g(h(\mathbf{x}))$ and $h$ is a scalar function simpler than $f$ in the sense of being expressible with a smaller graph as in Figure 3. By the chain rule, we have

$\nabla f = g'(h)\,\nabla h$, so $\widehat{\nabla f} = \pm\,\widehat{\nabla h}$,  (1)

where hats denote unit vectors: $\hat{\mathbf{v}} \equiv \mathbf{v}/|\mathbf{v}|$, etc. This means that if we can discover a function $h$ whose gradient is proportional to that of $f$ (which we will describe a process for in Section 2.2), then we can simply replace the variables $\mathbf{x}$ in the original mystery data table by the single variable $h(\mathbf{x})$ and recursively apply our AI Feynman algorithm to the new one-dimensional symbolic regression problem of discovering $g$.
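A numerical illustration of this gradient-alignment signature, with finite differences standing in for the network gradients and the functions chosen purely for illustration:

```python
import numpy as np

# If f = g(h), the unit-normalized gradients of f and of the inner module h
# coincide up to sign. We check this at random points by central differences.

def grad(F, x, eps=1e-5):
    """Central-difference gradient of F at point x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x); d[i] = eps
        g[i] = (F(x + d) - F(x - d)) / (2 * eps)
    return g

def unit(v):
    return v / np.linalg.norm(v)

h = lambda x: x[0] ** 2 + x[1] * x[2]   # candidate inner module
f = lambda x: np.exp(h(x)) + 1.0        # mystery f = g(h) with g(u) = exp(u)+1
wrong = lambda x: x[0] + x[1]           # an unrelated candidate module

rng = np.random.default_rng(1)
pts = rng.uniform(0.5, 1.5, size=(20, 3))
align = [abs(unit(grad(f, x)) @ unit(grad(h, x))) for x in pts]
misalign = [abs(unit(grad(f, x)) @ unit(grad(wrong, x))) for x in pts]
```

The correct module gives cosine similarity 1 (up to finite-difference error) at every point, while a wrong candidate fails at generic points.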
Generalized symmetry

Let us now turn to generalized symmetry (Figure 3, bottom left), where $k$ of the arguments enter only via some scalar function $h$ of them. Specifically, we say that $f$ has generalized symmetry if the $n$ components of the vector $\mathbf{x}$ can be split into groups of $k$ and $n-k$ components (which we denote by the vectors $\mathbf{x}'$ and $\mathbf{x}''$) such that $f(\mathbf{x}) = g(h(\mathbf{x}'), \mathbf{x}'')$ for some function $g$. By the chain rule, we have

$\nabla_{\mathbf{x}'} f = g_1(h(\mathbf{x}'), \mathbf{x}'')\,\nabla h(\mathbf{x}')$,  (2)

where $g_1$ denotes the derivative of $g$ with respect to its first argument. This means that $\widehat{\nabla_{\mathbf{x}'} f}$ is independent of $\mathbf{x}''$, which it would not be for a generic function $f$. $\mathbf{x}''$-independence of the normalized gradients thus provides a smoking-gun signature of generalized symmetry. Whereas our compositionality discovery above requires discovering an explicit function $h$, we can discover generalized symmetry without knowing $h$, thus only performing the time-consuming task of searching for an $h$ satisfying equation (2) after determining that a solution exists. The Supplementary Material details how we numerically test for independence of $\widehat{\nabla_{\mathbf{x}'} f}$.
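The independence signature can be checked numerically along these lines; the functions and the finite-difference scheme are illustrative stand-ins for the trained network and its gradients:

```python
import numpy as np

# For f = g(h(x1, x2), x3), the normalized gradient over the group (x1, x2)
# does not change as x3 varies; for a generic function it does.

def grad2(F, x1, x2, x3, eps=1e-5):
    """Unit-normalized finite-difference gradient over the group (x1, x2)."""
    g = np.array([
        (F(x1 + eps, x2, x3) - F(x1 - eps, x2, x3)) / (2 * eps),
        (F(x1, x2 + eps, x3) - F(x1, x2 - eps, x3)) / (2 * eps),
    ])
    return g / np.linalg.norm(g)

f_sym = lambda x1, x2, x3: (x1 ** 2 + x2 ** 2) * np.exp(x3) + x3 ** 3
f_plain = lambda x1, x2, x3: x1 * x3 + x2    # no such symmetry

x1, x2 = 0.7, -0.3
x3_values = np.linspace(0.1, 2.0, 10)
# Spread of the normalized group gradient as x3 sweeps its range.
spread = lambda F: np.ptp([grad2(F, x1, x2, t) for t in x3_values], axis=0).max()
```

A near-zero spread flags the variable pair as a generalized-symmetry candidate.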
Generalized additivity
If $f$ is a function of two variables, then we also test for generalized additivity (Figure 3, bottom right), where $f(x_1, x_2) = F(a(x_1) + b(x_2))$. If we define the function

$s(x_1, x_2) \equiv \dfrac{\partial f/\partial x_1}{\partial f/\partial x_2}$,  (3)

then $s(x_1, x_2) = a'(x_1)/b'(x_2)$ if $f$ satisfies the generalized additivity property. In other words, we simply need to test if $s$ is of the multiplicatively separable form $s(x_1, x_2) = p(x_1)\,q(x_2)$, and we do this using a variant of the separability test described in udrescu2020ai . The Supplementary Material details how we perform this separability test numerically.
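One simple way to probe the multiplicative separability of the derivative ratio numerically is a rank-one check on a sampled grid, since a function of the form $p(x_1)q(x_2)$ tabulated on a grid gives a rank-one matrix. This is an illustrative alternative, not necessarily the exact variant used in the paper, and the example functions are assumptions:

```python
import numpy as np

# Tabulate s = (df/dx1)/(df/dx2) on a grid and measure how far the
# resulting matrix is from rank one via its singular values.

def s_matrix(f, x1s, x2s, eps=1e-5):
    S = np.empty((len(x1s), len(x2s)))
    for i, a in enumerate(x1s):
        for j, b in enumerate(x2s):
            d1 = (f(a + eps, b) - f(a - eps, b)) / (2 * eps)
            d2 = (f(a, b + eps) - f(a, b - eps)) / (2 * eps)
            S[i, j] = d1 / d2
    return S

def separability_score(S):
    """Fraction of spectral weight outside the top singular value (0 = rank one)."""
    sv = np.linalg.svd(S, compute_uv=False)
    return 1.0 - sv[0] ** 2 / np.sum(sv ** 2)

f_add = lambda x1, x2: (x1 ** 2 + np.sin(x2)) ** 3   # F(a(x1) + b(x2))
f_non = lambda x1, x2: x1 * x2 + x1 ** 3             # no generalized additivity

grid = np.linspace(0.5, 1.5, 8)
score_add = separability_score(s_matrix(f_add, grid, grid))
score_non = separability_score(s_matrix(f_non, grid, grid))
```

A score near zero flags generalized additivity as worth pursuing.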
2.2 Robustness through recursive Pareto-optimality
As illustrated in Figure 1, the goal of our symbolic regression of a data set is to approximate the data by functions that are not only accurate, but also simple, in the spirit of Occam’s razor. As in schmidt2009distilling , we seek functions that are Pareto-optimal in the sense of there being no other function that is both simpler and more accurate. We will adopt an information-theoretical approach and use bits of information to measure lack of both accuracy and simplicity.
For accuracy, we wish the vector of prediction errors to be small. We quantify this not by the mean-squared error or max-error as in schmidt2009distilling ; udrescu2020ai , but by the MEDL, the mean error-description-length defined in Table 2. As argued in wu2019toward and illustrated in Figure 4, this improves robustness to outliers. We analogously quantify complexity by the description length $L_d$ defined as in wu2019toward , summarized in Table 2. $L_d$ can be viewed as a crude but computationally convenient approximation of the number of bits needed to describe each object, made differentiable where possible. We choose a small precision floor for real numbers. For function complexity, both input variables and mathematical functions (e.g., $\sin$ and $+$) count toward the description length. For example, the description length of the classical kinetic energy formula $E = \frac{1}{2} m v^2$ counts the basis functions and variables it contains and the number of times they are used.
Object  Symbol  Description length
Natural number  $n$  $\log_2 n$
Integer  $m$  $\log_2(1 + |m|)$
Rational number  $m/n$  $\log_2((1 + |m|)\,n)$
Real number  $r$  $\frac{1}{2}\log_2\left(1 + (r/\epsilon)^2\right)$
Parameter vector  p  $\sum_i L_d(p_i)$
Parametrized function  $f(\mathbf{x};\mathbf{p})$  $L_d(\mathbf{p}) + k \log_2 n$; the $n$ basis functions appear $k$ times
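For concreteness, one possible implementation of these description-length assignments, following wu2019toward as we understand it; the precision-floor value and exact functional forms are assumptions for illustration:

```python
import numpy as np

# Sketch of description-length (bit-count) assignments in the spirit of
# Table 2. EPS is an assumed precision floor for real numbers.

EPS = 2.0 ** -30

def dl_natural(n):
    """Bits for a natural number n >= 1."""
    return np.log2(n)

def dl_integer(m):
    """Bits for a signed integer."""
    return np.log2(1 + abs(m))

def dl_rational(m, n):
    """Bits for the fraction m/n (integer m, natural n)."""
    return dl_integer(m) + dl_natural(n)

def dl_real(r):
    """Soft, differentiable bit count for a real parameter."""
    return 0.5 * np.log2(1 + (r / EPS) ** 2)

def dl_params(p):
    """Bits for a vector of real parameters."""
    return sum(dl_real(r) for r in p)
```

Note that the real-number count degrades gracefully: a parameter that is exactly zero costs nothing, while a generic real costs roughly the number of bits needed to specify it to the precision floor.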
We wish to make the symbolic regression implementation of udrescu2020ai more robust; it sometimes fails to discover the correct expression because of noise in the data or inaccuracies introduced by the neural network fitting. The neural network accuracy may vary strongly with $\mathbf{x}$, becoming quite poor in domains with little training data or when the network is forced to extrapolate rather than interpolate, and we desire a regression method robust to such outliers. We expect our insistence on Pareto-optimal functions in the information plane of Figure 1 to increase robustness, both because the MEDL accuracy measure is robust (Figure 4) and because noise and systematic errors are unlikely to be predictable by a simple mathematical formula with small description length. More broadly, minimization of total exact description length (which $L_d$ crudely approximates) provably avoids the overfitting problem that plagues many alternative machine-learning strategies rissanen1978modeling ; hutter2000theory ; grunwald2005advances .

Speedup by recursive Pareto frontier composition
When recursively symbolically regressing various modules (see Figure 2), we end up with a Pareto frontier of candidate functions for each one. If the frontier for module $i$ contains $N_i$ functions, then combining them all would produce $\prod_i N_i$ candidates for the original function $f$. We speed up our algorithm by Pareto-pruning after each merge step: whenever two modules are combined (via composition or multiplication, say), the resulting functions are pruned by removing all functions that are Pareto-dominated by another function that is both simpler and more accurate. Pruning models on the Pareto frontier significantly reduces the number of models that need to be evaluated, since in typical scenarios, the number of Pareto-optimal points grows only logarithmically with the total number of points.
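The pruning step itself is straightforward; in this sketch, candidates are (complexity, loss, expression) tuples with illustrative values:

```python
# Drop every candidate that another candidate beats on both complexity
# (description length) and accuracy (MEDL), keeping the Pareto frontier.

def pareto_prune(candidates):
    """Keep (complexity, loss, expr) tuples not dominated by any other."""
    kept = []
    for c in candidates:
        dominated = any(
            o[0] <= c[0] and o[1] <= c[1] and (o[0] < c[0] or o[1] < c[1])
            for o in candidates
        )
        if not dominated:
            kept.append(c)
    return kept

frontier = pareto_prune([
    (5, 1.0, "a"), (10, 0.5, "b"), (12, 0.6, "c"),  # "c" dominated by "b"
    (20, 0.1, "d"), (25, 0.1, "e"),                  # "e" dominated by "d"
])
```

Applying this after every merge keeps the number of candidate combinations manageable.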
Robust speedup of brute-force graph search with hypothesis testing
Our recursive reduction of regression mysteries into simpler ones terminates at the base case when the mystery function has only one variable and cannot be further modularized. As in udrescu2020ai , we subject these (and also all multivariate modules) to two solution strategies, polynomial fitting up to some default maximum degree and brute-force search, and then add all candidate functions to the Pareto plane and prune as above. The brute-force search would, if run forever, try all symbolic expressions by looping over ever-more-complex graphs (the middle panel of Figure 2 shows an example) and over function options for each node.
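A toy version of such a brute-force search, enumerating reverse-Polish token sequences over a small basis in order of increasing length; the basis, encoding and data are illustrative assumptions, not the paper's exact setup:

```python
import itertools
import math

# Enumerate ever-longer reverse-Polish expressions over a tiny basis and
# return the first one that fits all data points.

UNARY = {"sin": math.sin}
BINARY = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
LEAVES = {"x": lambda x: x, "1": lambda x: 1.0}
TOKENS = list(LEAVES) + list(UNARY) + list(BINARY)

def evaluate(rpn, x):
    """Stack-evaluate a reverse-Polish token tuple; None if malformed."""
    stack = []
    for t in rpn:
        if t in LEAVES:
            stack.append(LEAVES[t](x))
        elif t in UNARY:
            if not stack:
                return None
            stack.append(UNARY[t](stack.pop()))
        else:
            if len(stack) < 2:
                return None
            b, a = stack.pop(), stack.pop()
            stack.append(BINARY[t](a, b))
    return stack[0] if len(stack) == 1 else None

def fits(rpn, points, tol=1e-9):
    for x, y in points:
        v = evaluate(rpn, x)
        if v is None or abs(v - y) > tol:
            return False
    return True

def brute_force(points, max_len=5):
    # Loop over ever-more-complex expressions (longer token sequences).
    for n in range(1, max_len + 1):
        for rpn in itertools.product(TOKENS, repeat=n):
            if fits(rpn, points):
                return rpn
    return None

data = [(x, math.sin(x) + 1.0) for x in (0.3, 0.7, 1.1, 1.9)]
found = brute_force(data)
```

Because candidates are tried in order of increasing length, the first match is also among the simplest in this encoding.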
Our brute-force computation of the Pareto frontier simply tries all functions in order of increasing complexity and keeps only those with lower mean error-description-length than the previous record holder. When instead fitting normalized gradient vectors as in Section 2.1, we define the per-point loss to handle the sign ambiguity. The bad news is that computing the mean loss exactly is slow, requiring evaluation of the candidate function for all data points. The good news is that this is usually unnecessary, since for the vast majority of all candidate functions, it becomes obvious that they provide a poor fit after trying merely a handful of data points. We therefore accelerate the search via the following procedure. Before starting the loop over candidate functions, we sort the data points in random order, so that we can interpret the per-point losses as random samples from a probability distribution whose mean is the sought-for mean loss and whose standard deviation is $\sigma$. Let $\bar{L}_*$ and $\sigma_*$ denote the corresponding quantities that were computed for the previous best-fit function we added to the Pareto frontier. We make the simplifying approximations that $\sigma = \sigma_*$ and that all errors are uncorrelated, so that the loss estimate from the first $i$ data points has standard deviation $\sigma/\sqrt{i}$ around its mean. We now test our candidate function on one data point at a time and reject it as soon as

$\bar{L}_i > \bar{L}_* + \nu\,\sigma_*\,i^{-1/2}$,  (4)

where $\bar{L}_i$ is the mean loss over the first $i$ points and $\nu$ is a hyperparameter that we can interpret as the “number of sigmas" we require to rule out a candidate function as viable when its average error exceeds the previous record holder. We find that a moderate $\nu$ usually works well, generically requiring no more than a handful of evaluations per candidate function asymptotically. We can further increase robustness by increasing $\nu$ at the price of longer runtime.

Speedup by greedy search of simplification options
We do not a priori know which of the modular decompositions from Figure 3 are most promising, and recursively trying all combinations of them would involve trying exponentially many options. We therefore accelerate our algorithm with a greedy strategy where at each step we compare the decompositions in a unified way and try only the most accurate one; our runtime thus grows roughly linearly with $n$, the number of input variables. The function $f$ stays constant along curves of constant $h$ for generalized symmetry, simple symmetry and generalized additivity. We thus test the accuracy of all such candidates by starting at a data point and computing an error at a second point predicted to give the same function value. For additive and multiplicative separability, we follow udrescu2020ai by examining a rectangle in parameter space and predicting the function value at the fourth corner from the other three, defining the error as the mismatch. The supplementary material details how our test points are chosen.
After this greedy recursive process has terminated, we further improve the Pareto frontier in two ways. We first add models where rational numbers are replaced by reals and optimized by gradient descent to fit the data. We then add models with zero-snap, integer-snap and rational-snap from Table 1 applied to all real-valued parameters as described in wu2019toward , pruning all Pareto-dominated models after each step. For example, if there are 3 real-valued parameters, integer-snap generates 3 new models where the 1, 2 and 3 parameters closest to integers get rounded, respectively.
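The integer-snap step, for example, can be sketched as follows; the parameter values are illustrative:

```python
# Generate one candidate model per k by rounding the k parameters that
# are already closest to integers, as in the integer-snap of Table 1.

def integer_snap_models(params):
    """Return candidate parameter vectors with 1, 2, ... params rounded."""
    dist = sorted(range(len(params)),
                  key=lambda i: abs(params[i] - round(params[i])))
    models = []
    for k in range(1, len(params) + 1):
        p = list(params)
        for i in dist[:k]:          # round the k nearest-to-integer params
            p[i] = float(round(p[i]))
        models.append(p)
    return models

models = integer_snap_models([2.999, 0.52, 1.05])
```

Each snapped model trades a little accuracy for a large drop in description length, and the Pareto pruning then decides which trades were worthwhile.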
2.3 Leveraging normalizing flows to symbolically regress probability distributions
An important but more difficult symbolic regression problem is when the unknown function is a probability distribution $f$ from which we have random samples rather than direct evaluations. We tackle this by preceding the regression with a step that estimates $f$ from the samples. For this step, we use the popular normalizing flow technique rezende15 ; dinh2017 ; kingma2018 ; durkan2019 ; kobyzev2020 , training an invertible neural network mapping $\mathbf{z} = g(\mathbf{x})$ such that $\mathbf{z}$ has a multivariate normal distribution, as illustrated in Figure 5. We then obtain our density estimator by multiplying the normal density of $g(\mathbf{x})$ by the determinant of the Jacobian of $g$. We find rational-quadratic neural spline flows (RQ-NSF) suitable for relatively low-dimensional applications due to their enhanced expressivity. Specifically, we used three steps of the RQ-NSF with RQ-NSF (C) coupling layers as described in durkan2019
, parametrized by three 16-neuron softplus layers, trained with the Adam optimizer. The learning rate was halved every time the test loss failed to improve for a fixed number of epochs.

3 Results
We now turn to quantifying the performance of our method with numerical experiments, comparing it with that of udrescu2020ai , which recently exceeded the previous state-of-the-art performance of schmidt2009distilling . To quantify robustness to noise, we add Gaussian noise to the function values and determine the largest noise level for which the method successfully discovers the correct mystery function $f$. As seen in Table 3, our method solves 73 of the 100 baseline problems from the Feynman Symbolic Regression Database udrescu2020ai , and is typically 1–3 orders of magnitude more robust to noise than that of udrescu2020ai . Crudely speaking, we found that adding progressively more noise shifted the most accurate formula straight upward in the Pareto plane (Figure 1) until it no longer provided any accuracy gains compared with simpler approximations.
To quantify the ability of our method to discover more complex equations, we reran it on all 17 mysteries that udrescu2020ai tackled and failed to solve. We also tested a dozen new mysteries exhibiting various forms of graph modularity (see Table 4) that were all chosen before any of them were tested. Allowing at most two hours of runtime, the method of udrescu2020ai solved equations 5 and 12–15, whereas our new method solved them all, as well as four of the outstanding mysteries from udrescu2020ai (rows 1–4). For these first four, our method got the numerical parameters in the right ballpark with rational approximations, then discovered their exact values through gradient descent.
Equation  Symmetries  
1  TC  
2  TS  
3  
4  P  
5  (Parallel resistors)  PGSM 
6  (RLC circuit )  MG 
7  (RLC circuit)  MG 
8  (Wheatstone bridge)  SGMA 
9  (Velocity addition)  AG 
10  (Velocity addition)  GA 
11  (norm)  AC 
12  GA  
13  A  
14  A  
15  A  
16  (Incircle)  GMAC 
To quantify the ability of our method to discover probability distributions, we tested it on samples from the ten distributions in Table 5. As seen in the table, 80% were solved, with the required number of samples varying between mysteries. The flows trained in about 20 minutes on one CPU, scaling roughly linearly with sample size and number of network weights. The Supplementary Material details failure modes.
Distribution Name  Probability distribution  
Laplace distribution  
Beta distribution (, )  
Beta distribution (, )  
Harmonic oscillator (, )  
Sinc diffraction pattern  
2D normal distribution (correlated)  
2D harmonic oscillator (, , )  
Hydrogen orbital (, , )  
Hydrogen orbital (, , )    
Hydrogen orbital (, , )   
4 Conclusions
We have presented a symbolic regression method that exploits neural networks, graph modularity, hypothesis testing and normalizing flows. It improves state-of-the-art performance both by being more robust to noise and by solving harder problems, including symbolic density estimation. These core ideas can enable further improvements. For example, gradients can reveal more types of graph modularity than the Figure 3 examples that we exploited; additional simplification strategies can be included in the Pareto-optimal recursion; and flow-based regression can be used for regularized density estimation from sparse high-dimensional data. Larger and more challenging collections of science-based equations are needed to benchmark and inspire improved algorithms.
Pareto-optimal symbolic regression has the power to not only discover exact formulas, but also approximate ones that are useful for being both accurate and simple. The mainstream view is that all known science formulas are such approximations. We live in a golden age of research with ever-larger datasets produced by both experiments and numerical computations, and we look forward to a future when symbolic regression is as ubiquitous as linear regression is today, helping us better understand the relations hidden in these datasets.
The authors wish to thank Philip Tegmark for helpful comments, and the Center for Brains, Minds, and Machines (CBMM) for hospitality.
Funding: This work was supported by The Casey and Family Foundation, the Ethics and Governance of AI Fund, the Foundational Questions Institute, the Rothberg Family Fund for Cognitive Science and the Templeton World Charity Foundation, Inc.
Competing interests: The authors declare that they have no competing interests.
Data and materials availability: Data and code will be publicly released when this paper is accepted for publication.
Broader Impact
Who may benefit from this research
Our research presumably has quite broad impact, since discovery of mathematical patterns in data is a central problem across the natural and social sciences. Given the ubiquity of linear regression in research, one might expect that there will be significant benefits to a broad range of researchers from more general symbolic regression as well, once freely available algorithms get sufficiently good.
Who may be put at disadvantage from this research
Although it is possible that some numerical modelers could get their jobs automated away by symbolic regression, we suspect that the main effect of our method, and future tools building on it, will instead be that these people will simply discover better models than today.
Risk of bias, failure and other negative outcomes
Pareto-optimal symbolic regression can be viewed as an extreme form of lossy data compression that uncovers the simplest possible model for any given accuracy. To the extent that overfitting can exacerbate bias, such model compression is expected to help. Moreover, since our method produces closed-form mathematical formulas that have excellent interpretability compared to black-box neural networks, they make it easier for humans to interpret the computation and pass judgement on whether it embodies unacceptable bias. This interpretability also reduces failure risk.
Another risk is automation bias, whereby people overly trust a formula from symbolic regression when they extrapolate it into an untested domain. This could be exacerbated if symbolic regression promotes scientific laziness and enfeeblement, where researchers fit phenomenological models instead of doing the work of building models based on first principles. Symbolic regression should inform but not replace traditional scientific discovery.
Although the choice of basis functions biases the discoverable function class, our method is agnostic to basis functions as long as they are mostly differentiable.
The potentially greatest risk associated with this work does not stem from it failing but from it succeeding: accelerated progress in symbolic regression, modularity discovery and its parent discipline program synthesis could hasten the arrival of artificial general intelligence, which some authors have argued humanity still lacks the tools to manage safely russell2019human
. On the other hand, our work may help accelerate research on intelligible intelligence more broadly, and powerful future artificial intelligence is probably safer if we understand aspects of how it works than if it is an inscrutable black box.
Supplementary material
Below we provide additional technical details about how we implement our method and numerical experiments.
Appendix A Testing for generalized symmetry
We showed that generalized symmetry can be revealed by $\widehat{\nabla_{\mathbf{x}'} f}$ being independent of $\mathbf{x}''$. We will now describe how we test for such independence numerically. Given a point from our data set, we compute a set of normalized gradients $\widehat{\nabla_{\mathbf{x}'} f}(\mathbf{x}_j)$, where the $\mathbf{x}_j$ correspond to a sample of other data points, and quantify the variation between them by the quantity

$V \equiv \left\langle \widehat{\nabla_{\mathbf{x}'} f}\;\widehat{\nabla_{\mathbf{x}'} f}^{\,T} \right\rangle$, with variation measure $1 - \lambda_{\max}(V)$,  (5)

We can intuitively interpret the top eigenvector of V as the direction maximally aligned with the vectors $\widehat{\nabla_{\mathbf{x}'} f}$ up to a sign. Equation (5) implies that our variation measure is simply one minus the largest eigenvalue of V, so it ranges from 0 when all normalized gradients are identical to $1 - 1/k$ when all eigenvalues are equal (equal to $1/k$, since ${\rm tr}\,V = 1$). As illustrated in Figure 6, we compute this variation measure for each subset of up to $k_{\max}$ input variables, and select the subset with the smallest median as the most promising generalized symmetry candidate. In our numerical experiments, we cap the hyperparameter $k_{\max}$ to save time, since we do not wish to consider all subsets for large $n$.

Appendix B Testing for generalized additivity
We showed that generalized additivity holds when the function $s$ from Equation (3) is multiplicatively separable. We will now describe how we test for such separability numerically. $s$ being multiplicatively separable is equivalent to $g \equiv \ln|s|$ being additively separable, and additive separability in turn means that the mixed second derivative vanishes:

$\dfrac{\partial^2 g}{\partial x_1\,\partial x_2} = 0$.  (6)

It is easy to see that equation (6) holds if $g$ is a sum of separate functions of $x_1$ and $x_2$, and generically not otherwise. If the median value of a suitably normalized score for this mixed derivative over all points in the dataset is sufficiently small, we take this as evidence for generalized additivity in the dataset and proceed as below. It is important to use smooth (not, e.g., ReLU) activation functions in the neural network for this derivative-based test to be useful.
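A numerical version of this derivative-based test can be sketched as follows; the normalization of the score is an illustrative choice, not necessarily the paper's exact definition:

```python
import numpy as np

# Additive separability test: g(x1,x2) = a(x1) + b(x2) exactly when the
# mixed partial d2g/dx1dx2 vanishes. We estimate it with a cross
# finite difference and normalize by the first partials.

def mixed_partial(g, a, b, eps=1e-3):
    return (g(a + eps, b + eps) - g(a + eps, b - eps)
            - g(a - eps, b + eps) + g(a - eps, b - eps)) / (4 * eps ** 2)

def additivity_score(g, points, eps=1e-3):
    """Median |mixed partial|, normalized by the geometric mean of |dg/dx_i|."""
    scores = []
    for a, b in points:
        d1 = (g(a + eps, b) - g(a - eps, b)) / (2 * eps)
        d2 = (g(a, b + eps) - g(a, b - eps)) / (2 * eps)
        norm = np.sqrt(abs(d1 * d2)) + 1e-12
        scores.append(abs(mixed_partial(g, a, b, eps)) / norm)
    return np.median(scores)

pts = [(0.6, 0.9), (1.1, 0.7), (0.8, 1.3), (1.4, 1.2)]
g_sep = lambda a, b: np.log(a) + b ** 2      # additively separable
g_not = lambda a, b: np.sin(a * b)           # not separable
```

A near-zero median score flags additive separability (and hence generalized additivity of the original mystery).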
If this property holds, then we recursively apply our algorithm to the two new one-dimensional symbolic regression problems of discovering the two separable factors. If this succeeds and we are able to discover the functions $a$ and $b$ by symbolically integrating our solutions for $a'$ and $b'$, then we have reduced the original problem to the same state as when we found compositionality above, now with $h(x_1, x_2) = a(x_1) + b(x_2)$. Just as in that case, we simply replace the variables x in the original mystery data table by the single variable $h$ and recursively apply our AI Feynman algorithm to the new one-dimensional symbolic regression problem of discovering how $f$ depends on $h$.
If we have determined that generalized additivity holds but the aforementioned method for discovering $f$ fails, we make a second attempt by training a neural network of the modular form $F(a(x_1) + b(x_2))$ to fit the data. If this succeeds, we then recursively apply our AI Feynman algorithm to the three new one-dimensional symbolic regression problems of discovering $F$, $a$ and $b$.
Appendix C Further details on success and failure modes
Our paper reported which symbolic regression problems our method succeeded and failed on. Here we add specifics on how these successes and failures occurred.
Success definition
Given a data set, we use 90% of the data to compute a Pareto-optimal set of candidate functions, then rank them based on their MEDL accuracy on the held-back 10% of the data. We count our method as successful only if the top-ranked function matches the true function exactly, or, if the definition of the function involves irrational numerical parameters, if these parameters are recovered to better than a preset relative accuracy.
We considered an equation solved even if the top solution was not in the exact form presented in our tables, but mathematically equivalent. For example, our method predicted Equation (12) in Table 4 in a different but mathematically equivalent form, valid within the domain of our provided data set.
For the problem of density estimation from samples, our goal was to obtain the correct normalized probability distributions. The candidate functions on the Pareto frontier were therefore discarded unless they were nonnegative and normalizable. The surviving candidates were then normalized to integrate to unity by symbolic/numerical integration to obtain the appropriate normalization constant, and quality-ranked by the surprisal loss function evaluated on the held-back test data.
Success examples
Tables 1 and 2 below show the highest noise level allowing each of the 100 equations from the Feynman Database for Symbolic Regression to be solved, both in the original paper analyzing it and in the present paper.
For many of the solved equations, modularity discovery had to be used multiple times in order for the correct equation to be discovered, reflecting the power of the recursive algorithm. For example, for the quadruple velocity addition equation in Table 4, generalized symmetry was exploited twice. First, the code discovered that the first two velocities enter only in the relativistic velocity-addition combination, and these two variables were replaced by a new variable. The same method then discovered that this new variable and the third velocity enter only in that same combination, so the initial 3 velocities were replaced by a single variable. Now the remaining equation had only 2 variables left, and was solved by brute force. In principle, this recursive method can be used to discover relativistic addition of an arbitrary number of velocities, by reducing the number of variables by one at each step.
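This recursion is easy to illustrate concretely: in units where c = 1, relativistic velocity addition combines two velocities as (u + v)/(1 + uv), and folding that rule pairwise handles any number of velocities one variable at a time:

```python
from functools import reduce

# Relativistic velocity addition (units where c = 1) applied recursively:
# at each step, two velocities are merged into one new variable, exactly
# mirroring the variable-reduction strategy described above.

def add_velocities(u, v):
    return (u + v) / (1 + u * v)

def combine(velocities):
    """Fold an arbitrary list of velocities down to a single one."""
    return reduce(add_velocities, velocities)

w = combine([0.5, 0.5, 0.5, 0.5])   # quadruple velocity addition
```

Note that the combined velocity always stays below 1, as special relativity requires.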
Failure examples
As mentioned in the main text, numerous equations remained unsolved, motivating further work. In some cases, the main reason for this failure was that the form of the equation did not allow our method to break it into small enough pieces. For one such equation, our algorithm is able to discover a multiplicative separability into terms that each involve only a subset of the variables. However, the remaining term is too complicated to be solved in a reasonable amount of time by the brute-force code, and none of the graph modularity methods apply, because they only help for functions of more than one variable.
For other equations, our method fails but not irreparably. For example, for one such function our code is able to discover that two of the variables can be separated from the rest of the equation. However, given that we allow the brute-force code to run for only a minute at each iteration, the remaining expression is not discovered, mainly because one of its constituent functions was not included in the basis set, so the brute force would have to express it in terms of more elementary functions. By allowing the code to run for longer and experimenting with additional basis functions, it is likely that the code would solve this and several other mysteries that we reported as failures.
Fitting a normalizing flow for density estimation presents a ‘real-world’ test of the symbolic regression code, which needs to cope with flow errors. We demonstrated the robustness of the code to these more challenging (i.e., not i.i.d. Gaussian) errors; some of the failure modes are investigated and expanded on below.
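To make the flow-based setting concrete, here is a minimal change-of-variables density estimate with the RQ-NSF replaced by a single affine map, an illustrative simplification: the maximum-likelihood affine flow just matches the sample mean and deviation, and the estimated density is the normal density of the latent variable times the Jacobian factor:

```python
import numpy as np

# One-dimensional toy flow: z = (x - mu) / sigma maps samples toward a
# standard normal, and the density estimate follows from the change-of-
# variables formula f(x) = n(z) * |dz/dx| = n(z) / sigma.

def fit_affine_flow(samples):
    mu, sigma = samples.mean(), samples.std()
    def density(x):
        z = (x - mu) / sigma
        return np.exp(-0.5 * z ** 2) / (np.sqrt(2 * np.pi) * sigma)
    return density

rng = np.random.default_rng(3)
samples = rng.normal(2.0, 0.5, size=50_000)
f_hat = fit_affine_flow(samples)
```

The symbolic regression step then receives pointwise evaluations of such an estimator, errors included, rather than exact function values.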
It is worth noting that our definition of complexity depends on the chosen set of operations and does not always match our intuition. For example, in fitting the probability distribution of electron positions in a hydrogen orbital, solutions with one radial dependence are preferred over the correct one. This is because, up to additive and multiplicative prefactors, the two formulas differ by only a small amount over our parameter range, while the preferred alternative is encoded by a shorter string in reverse Polish notation given our set of operations. In the presence of the imprecisions introduced by the normalizing flow, we were unable to perform the density estimation at a level at which the accuracy of the correct formula was preferred over the simpler alternative.
Furthermore, more interpretable approximations (e.g., Taylor expansions) are not always favored by our definition of complexity. For example, in Figure 1, the unfamiliar solution intermediate between the two more familiar limits can be understood as a fourth-order Taylor approximation of the exact formula. The Taylor expansions themselves are not preferred for reasons of complexity.
Feynman eq.  Equation  Old Noise tolerance  New Noise tolerance 
I.6.20a  
I.6.20  
I.6.20b  
I.8.14  
I.9.18  
I.10.7  
I.11.19  
I.12.1  
I.12.1a  
I.12.2  
I.12.4  
I.12.5  
I.12.11  
I.13.12  
I.14.3  
I.14.4  
I.15.3x  
I.15.3t  
I.15.1  
I.16.6  
I.18.4  
I.18.12  
I.18.14  
I.24.6  
I.25.13  
I.26.2  
I.27.6  
I.29.4  
I.29.16  
I.30.3  
I.30.5  
I.32.5  
I.32.17  
I.34.8  
I.34.10  
I.34.14  
I.34.27  
I.37.4  
I.38.12  
I.39.10  
I.39.11  
I.39.22  
I.40.1  
I.41.16  
I.43.16  
I.43.31  
I.43.43  
I.44.4 