Machine Learning of Space-Fractional Differential Equations

08/02/2018
by   Mamikon Gulian, et al.

Data-driven discovery of "hidden physics" -- i.e., machine learning of differential equation models underlying observed data -- has recently been approached by embedding the discovery problem into a Gaussian Process regression of spatial data, treating and discovering unknown equation parameters as hyperparameters of a modified "physics informed" Gaussian Process kernel. This kernel includes the parametrized differential operators applied to a prior covariance kernel. We extend this framework to linear space-fractional differential equations. The methodology is compatible with a wide variety of fractional operators in R^d and stationary covariance kernels, including the Matérn class, and can optimize the Matérn parameter during training. We provide a user-friendly and feasible way to compute fractional derivatives of kernels, via a unified set of d-dimensional Fourier integral formulas amenable to generalized Gauss-Laguerre quadrature. The implementation of fractional derivatives has several benefits. First, it allows for discovering fractional-order PDEs for systems characterized by heavy tails or anomalous diffusion, bypassing the analytical difficulty of fractional calculus. Data sets exhibiting such features are of increasing prevalence in physical and financial domains. Second, a single fractional-order archetype allows for a derivative of arbitrary order to be learned, with the order itself being a parameter in the regression. This is advantageous even when used for discovering integer-order equations; the user is not required to assume a "dictionary" of derivatives of various orders, and directly controls the parsimony of the models being discovered. We illustrate the method on several examples, including fractional-order interpolation of advection-diffusion and modeling relative stock performance in the S&P 500 with alpha-stable motion via a fractional diffusion equation.


1 Introduction

A novel use of machine learning, which has potential both for modeling with large or high-frequency data sets and for advancing fundamental science, is the discovery of governing differential equations from data. Unlike highly specialized algorithms used to refine existing models, these novel methods are distinguished by comparatively limited assumptions and by the ability to produce various types of equations from a wide variety of data.

We now give a very brief (and incomplete) overview of a few proposed algorithms. Key differences between the various works include the techniques used to generate candidate equations and the techniques used to select among them. A fundamental problem that all of these works address is that, if such algorithms are to mimic a human scientist, they must produce accurate models while avoiding or penalizing spurious "overfitted" models. In [34], a symbolic regression method was developed to learn conservation laws of physical systems. The laws, which could be nonlinear, were created using genetic programming and evaluated concurrently on predictive ability and parsimony (number of terms) in order to prevent overfitting. Earlier, in [2], in a dynamical systems context, the equations were evaluated using "probing" tests while overfitting was addressed using a separate "snipping" process. In [5], a more parametric method for learning nonlinear autonomous systems was developed in which the candidate equation is built from a linear combination of elements of a user-defined "dictionary". Parsimony translated to sparsity in the dictionary elements, so sparse regression was used to determine the coefficients. In [32], a similar approach was used to generate nonlinear partial differential evolution equations.

Building on an earlier work [26] where Gaussian process regression was used to infer solutions of equations, a Gaussian process framework was developed in [27] for parametric learning of linear differential equations of the form

(1) $\mathcal{L}^{\phi}_{\mathbf{x}} u(\mathbf{x}) = f(\mathbf{x}),$

given small data on $u$ and $f$. Here $\mathcal{L}^{\phi}_{\mathbf{x}}$ refers to a linear differential operator with parameters $\phi$; for example, $\mathcal{L}^{\phi}_{\mathbf{x}}$ could be a linear combination of differential operators with coefficients $\phi$. The method placed a Gaussian Process prior on the function $u$ and used linearity of $\mathcal{L}^{\phi}_{\mathbf{x}}$ to obtain a joint Gaussian Process on $(u, f)$ in which the unknown parameters $\phi$ were subsumed as hyperparameters (this is reviewed in detail in Section 2). This allowed the use of standard continuous optimization techniques to optimize the negative log-marginal likelihood, which effects an automatic trade-off between data-fit and model complexity, and thereby find $\phi$. Thus, in this approach, the problem of identifying the differential equation is "embedded" into the problem of interpolation/regression of data. Moreover, the Gaussian Process method only requires computation of the forward action of $\mathcal{L}^{\phi}_{\mathbf{x}}$ on the covariance kernel, rather than the solution of the differential equation. However, the method also requires the user to select a "dictionary" of terms and assume a parametric form of the equation.

The inclusion of fractional-order operators in this framework, with the order itself a parameter, allows a single fractional-order operator to interpolate between derivatives of all orders. This allows the user to directly control the parsimony, by fitting a specified number of fractional derivatives to data, without making assumptions on the orders of the derivatives to be discovered. Therefore, building on a basic example of a fractional-order operator treated in [27], we significantly extend the framework to treat other space-fractional differential operators and covariance kernels. The main problem that must be addressed is the efficient computation of the action of the unknown (fractional) linear operator on covariance kernels.

At the same time, fractional-order derivatives are far more than a tool to interpolate between integer-order derivatives and facilitate data-driven discovery of PDEs using continuous optimization techniques. The advances in the present article further improve the ability to discover fractional-order partial differential equations (FPDEs) from real-world data. It is now well understood that FPDEs have a profound connection with anomalous diffusion and systems driven by heavy-tailed processes ([18], [19]). While such heavy-tailed data abounds in the fields of, e.g., hydrology, finance, and plasma physics, FPDEs are currently underutilized as tools to model macroscopic properties of such systems. This can be attributed to the analytic difficulties of deriving FPDE models; not only is there an additional parameter, but the involved formulas and nonlocal nature of fractional-order derivatives make them significantly less attractive for specialists in other fields to work with.

Machine learning is a natural tool for ameliorating this issue. As proof of this concept, we point to the work [21], which employed a multi-fidelity Gaussian Process method to discover the fractional order of the fractional advection-dispersion equation governing underground tritium transport through a heterogeneous medium at the Macrodispersion Experimental site at Columbus Air Force Base. This resulted in improved and highly efficient fitting of a variable-order fractional equation to data. Along more theoretical lines, [10] explored the determination, using Gaussian processes, of the fractional order of an elliptic problem involving a spectral fractional Laplacian on a bounded domain with Neumann boundary condition. In addition, the authors also proved well-posedness of the Bayesian formulation of this inverse problem in the sense of [36]. Our work differs from [21] in that we do not repeatedly solve the forward problem, and from [10] in that we do not place a prior on the fractional order or on other parameters; rather, the parameters are inferred by placing a prior and training on the solution and right-hand side of the equation. This allows for more flexibility with regard to the form of the equation and the inclusion of additional parameters.

In the aforementioned Gaussian Process framework of [27], it is possible to treat time-derivatives by considering the data and equation in space-time; time may be treated as another dimension in the covariance kernel. An alternative method is suggested by the work of [24], in which learning of evolution equations is based on numerical time-differentiation of Gaussian processes. There, the data is given at different "snapshots" in time, which are used to discretize the time-derivative. In this way, even nonlinear time-dependent equations can be discovered using a correspondence between nonlinear terms and specific linearizations of the discretized system. The training is similar to that of [27], and we extend this approach to fractional operators as well.

This work is organized as follows. In section 2, we review the Gaussian Process framework of [27] and [24]. In section 3, we review the Matérn family of covariance kernels and space-fractional operators and present new formulas for space-fractional derivatives of such covariance kernels; such covariance kernels can be better suited to rough data in certain applications. The inclusion of these new formulas, which can be efficiently treated using generalized Gauss-Laguerre quadrature, allows the discovery of various fractional equations in a unified way. In section 4 we present basic synthetic examples to illustrate the methodology, and in section 5 we apply the methodology to the discovery and interpolation of integer-order advection and diffusion using the fractional-order framework. This includes an interesting problem of interpolating two-term advection-diffusion using a single fractional-order term, and an exploration of user-selected parsimony. After reviewing the relation between α-stable processes and fractional diffusion in section 6 and studying a synthetic example, in section 7 we apply the methodology to the modeling of relative stock performance (Intel to S&P 500) by α-stable processes via an associated fractional-order diffusion equation.

2 The Gaussian Process Framework

We review the framework developed by [27] for parametric learning of linear differential equations of the form

(2) $\mathcal{L}^{\phi}_{\mathbf{x}} u(\mathbf{x}) = f(\mathbf{x}),$

given data on $u$ and $f$. Here, $\mathcal{L}^{\phi}_{\mathbf{x}}$ is a linear operator with unknown parameters $\phi$. Here, and throughout the article, we use boldface characters (such as $\mathbf{x}$) to denote vector-valued variables, and capital boldface characters (such as $\mathbf{X}$) to denote data vectors.

Assume $u$ to be a Gaussian process with mean zero and covariance function $k_{uu}$ with hyperparameters $\theta$:

(3) $u(\mathbf{x}) \sim \mathcal{GP}\big(0,\, k_{uu}(\mathbf{x}, \mathbf{x}'; \theta)\big).$

We shall be vague about the form of the covariance kernel until Section 3; for now, it suffices to say that it describes how the correlation between the values of $u$ at two points $\mathbf{x}$ and $\mathbf{x}'$ falls off with the distance $|\mathbf{x} - \mathbf{x}'|$ or otherwise behaves with the two points, and that it must be a symmetric, positive semidefinite function [31]. Then, the linear transformation $f = \mathcal{L}^{\phi}_{\mathbf{x}} u$ of the Gaussian process implies a Gaussian Process for $f$ (see [31], §9.4),

(4) $f(\mathbf{x}) \sim \mathcal{GP}\big(0,\, k_{ff}(\mathbf{x}, \mathbf{x}'; \theta, \phi)\big),$

with covariance kernel

(5) $k_{ff}(\mathbf{x}, \mathbf{x}'; \theta, \phi) = \mathcal{L}^{\phi}_{\mathbf{x}} \mathcal{L}^{\phi}_{\mathbf{x}'}\, k_{uu}(\mathbf{x}, \mathbf{x}'; \theta).$

Moreover, the covariance between $u(\mathbf{x})$ and $f(\mathbf{x}')$, and between $f(\mathbf{x})$ and $u(\mathbf{x}')$, is

(6) $k_{uf}(\mathbf{x}, \mathbf{x}'; \theta, \phi) = \mathcal{L}^{\phi}_{\mathbf{x}'}\, k_{uu}(\mathbf{x}, \mathbf{x}'; \theta), \qquad k_{fu}(\mathbf{x}, \mathbf{x}'; \theta, \phi) = \mathcal{L}^{\phi}_{\mathbf{x}}\, k_{uu}(\mathbf{x}, \mathbf{x}'; \theta),$

respectively. By symmetry of $k_{uu}$,

(7) $k_{fu}(\mathbf{x}, \mathbf{x}'; \theta, \phi) = k_{uf}(\mathbf{x}', \mathbf{x}; \theta, \phi).$

The hyperparameters $(\theta, \phi)$ of the joint Gaussian Process

(8) $\begin{bmatrix} u(\mathbf{x}) \\ f(\mathbf{x}) \end{bmatrix} \sim \mathcal{GP}\left( 0,\, \begin{bmatrix} k_{uu} & k_{uf} \\ k_{fu} & k_{ff} \end{bmatrix} \right)$

are then learned by training on the data $\mathbf{U}$ of $u$ given at points $\mathbf{X}_u$ and $\mathbf{F}$ of $f$ given at points $\mathbf{X}_f$. This is done with a Quasi-Newton optimizer, L-BFGS, to minimize the negative log marginal likelihood (NLML) ([31]):

(9) $\mathrm{NLML} = \frac{1}{2}\, \mathbf{Y}^{\top} K^{-1} \mathbf{Y} + \frac{1}{2} \log |K| + \frac{N}{2} \log 2\pi,$

where $\mathbf{Y} = [\mathbf{U}; \mathbf{F}]$, $N$ is the total number of data points, and $K$ is given by

(10) $K = \begin{bmatrix} k_{uu}(\mathbf{X}_u, \mathbf{X}_u; \theta) + \sigma_u^2 I & k_{uf}(\mathbf{X}_u, \mathbf{X}_f; \theta, \phi) \\ k_{fu}(\mathbf{X}_f, \mathbf{X}_u; \theta, \phi) & k_{ff}(\mathbf{X}_f, \mathbf{X}_f; \theta, \phi) + \sigma_f^2 I \end{bmatrix}.$

The additional noise parameters $\sigma_u$ and $\sigma_f$ are included to learn uncorrelated noise in the data; their inclusion above corresponds to the assumption that

(11) $\mathbf{U} = u(\mathbf{X}_u) + \boldsymbol{\epsilon}_u, \qquad \mathbf{F} = f(\mathbf{X}_f) + \boldsymbol{\epsilon}_f,$

with $\boldsymbol{\epsilon}_u \sim \mathcal{N}(0, \sigma_u^2 I)$ and $\boldsymbol{\epsilon}_f \sim \mathcal{N}(0, \sigma_f^2 I)$ independently.
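As a concrete illustration of this training loop, the following minimal Python sketch (our own illustration, not the authors' MATLAB implementation) assembles the joint covariance matrix (10), evaluates the NLML (9), and minimizes it with L-BFGS. To keep the sketch self-contained it uses the toy operator $\mathcal{L}^{\phi} u = \phi\, u$, for which the kernel blocks are simply $k_{uf} = \phi\, k_{uu}$ and $k_{ff} = \phi^2 k_{uu}$; for a fractional operator these blocks would instead be evaluated by the quadrature formulas of Section 3. All names and test data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def k_uu(X, Xp, theta):
    # 1D squared-exponential kernel; theta = (log sigma^2, log ell)
    sig2, ell = np.exp(theta)
    return sig2 * np.exp(-0.5 * (X[:, None] - Xp[None, :])**2 / ell**2)

def nlml(params, Xu, U, Xf, F, jitter=1e-6):
    theta, phi, log_su, log_sf = params[:2], params[2], params[3], params[4]
    # kernel blocks for the toy operator L^phi u = phi * u
    Kuu = k_uu(Xu, Xu, theta) + (np.exp(2 * log_su) + jitter) * np.eye(len(Xu))
    Kuf = phi * k_uu(Xu, Xf, theta)
    Kff = phi**2 * k_uu(Xf, Xf, theta) + (np.exp(2 * log_sf) + jitter) * np.eye(len(Xf))
    K = np.block([[Kuu, Kuf], [Kuf.T, Kff]])          # equation (10)
    y = np.concatenate([U, F])
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    # equation (9): 0.5 y^T K^{-1} y + 0.5 log|K| + (N/2) log(2 pi)
    return 0.5 * y @ alpha + np.sum(np.log(np.diag(L))) + 0.5 * len(y) * np.log(2 * np.pi)

# synthetic data with f = 2.5 * u, so the equation parameter phi should train to about 2.5
rng = np.random.default_rng(0)
Xu, Xf = rng.uniform(-1, 1, 12), rng.uniform(-1, 1, 12)
U, F = np.sin(np.pi * Xu), 2.5 * np.sin(np.pi * Xf)

res = minimize(nlml, x0=[0.0, 0.0, 1.0, -4.0, -4.0], args=(Xu, U, Xf, F), method="L-BFGS-B")
print("learned phi:", res.x[2])
```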

Next we review the time-stepping Gaussian Process method of [24] for learning linear (in our case) equations of the form

(12) $\frac{\partial u}{\partial t} = \mathcal{L}^{\phi}_{\mathbf{x}} u.$

For our purposes, we consider two "snapshots" $u^{n-1}$ and $u^{n}$ of the system at two times $t_{n-1}$ and $t_{n}$, respectively, such that

(13) $t_{n} = t_{n-1} + \Delta t.$

We perform, in the case of two snapshots, a backward Euler discretization

(14) $\frac{u^{n} - u^{n-1}}{\Delta t} = \mathcal{L}^{\phi}_{\mathbf{x}} u^{n}, \qquad \text{i.e.,} \qquad u^{n-1} = \big( \mathrm{Id} - \Delta t\, \mathcal{L}^{\phi}_{\mathbf{x}} \big)\, u^{n}.$

Then we assume a Gaussian Process, but for $u^{n}$:

(15) $u^{n}(\mathbf{x}) \sim \mathcal{GP}\big( 0,\, k_{uu}(\mathbf{x}, \mathbf{x}'; \theta) \big).$

As before, the linearity of $\mathcal{L}^{\phi}_{\mathbf{x}}$ leads to a Gaussian Process for $u^{n-1}$. We obtain the joint Gaussian process

(16) $\begin{bmatrix} u^{n}(\mathbf{x}) \\ u^{n-1}(\mathbf{x}) \end{bmatrix} \sim \mathcal{GP}\left( 0,\, \begin{bmatrix} k_{n,n} & k_{n,n-1} \\ k_{n-1,n} & k_{n-1,n-1} \end{bmatrix} \right),$

where, denoting the identity operator by Id,

(17) $k_{n,n} = k_{uu}, \quad k_{n,n-1} = \big( \mathrm{Id} - \Delta t\, \mathcal{L}^{\phi}_{\mathbf{x}'} \big) k_{uu}, \quad k_{n-1,n} = \big( \mathrm{Id} - \Delta t\, \mathcal{L}^{\phi}_{\mathbf{x}} \big) k_{uu}, \quad k_{n-1,n-1} = \big( \mathrm{Id} - \Delta t\, \mathcal{L}^{\phi}_{\mathbf{x}} \big) \big( \mathrm{Id} - \Delta t\, \mathcal{L}^{\phi}_{\mathbf{x}'} \big) k_{uu}.$

Equation (16) can be compared to equation (8), and equation (17) to (5) and (6). The set-ups are very similar, and again, $\phi$ has been merged into the hyperparameters of this joint Gaussian Process. Given data at spatial points $\mathbf{X}_{n}$ and $\mathbf{X}_{n-1}$ for the functions $u^{n}$ and $u^{n-1}$, represented by the vectors $\mathbf{U}^{n}$ and $\mathbf{U}^{n-1}$, respectively, the new hyperparameters are trained by employing the same Quasi-Newton optimizer L-BFGS as before to minimize the NLML given by equation (9). In this case, $\mathbf{Y} = [\mathbf{U}^{n}; \mathbf{U}^{n-1}]$, $N$ is the total number of data points, and $K$ is given by

(18) $K = \begin{bmatrix} k_{n,n}(\mathbf{X}_{n}, \mathbf{X}_{n}) & k_{n,n-1}(\mathbf{X}_{n}, \mathbf{X}_{n-1}) \\ k_{n-1,n}(\mathbf{X}_{n-1}, \mathbf{X}_{n}) & k_{n-1,n-1}(\mathbf{X}_{n-1}, \mathbf{X}_{n-1}) \end{bmatrix} + \sigma^{2} I.$

Here, $\sigma$ is an additional parameter to learn noise in the data, under the assumption

(19) $\mathbf{U}^{n} = u^{n}(\mathbf{X}_{n}) + \boldsymbol{\epsilon}_{n}, \qquad \mathbf{U}^{n-1} = u^{n-1}(\mathbf{X}_{n-1}) + \boldsymbol{\epsilon}_{n-1},$

with $\boldsymbol{\epsilon}_{n} \sim \mathcal{N}(0, \sigma^{2} I)$ and $\boldsymbol{\epsilon}_{n-1} \sim \mathcal{N}(0, \sigma^{2} I)$ being independent.
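For integer-order operators, the kernel blocks (17) can be generated symbolically, mirroring the closed-form (Mathematica) route used in [24] and [27]. The short Python/SymPy sketch below is our own illustration of this step; the diffusion archetype $\mathcal{L}^{\phi} = \phi\, \partial_x^2$ and the squared-exponential prior kernel are assumptions made purely for demonstration.

```python
import sympy as sp

x, xp, ell, phi, dt = sp.symbols("x x' ell phi dt", positive=True)
k = sp.exp(-(x - xp)**2 / (2 * ell**2))          # prior kernel k_uu(x, x')

L_x  = lambda expr: phi * sp.diff(expr, x, 2)    # L^phi applied in the x variable
L_xp = lambda expr: phi * sp.diff(expr, xp, 2)   # L^phi applied in the x' variable

k_nn      = k                                                     # cov(u^n, u^n)
k_n_nm1   = k - dt * L_xp(k)                                      # (Id - dt L_{x'}) k
k_nm1_n   = k - dt * L_x(k)                                       # (Id - dt L_x) k
k_nm1_nm1 = k - dt * L_x(k) - dt * L_xp(k) + dt**2 * L_x(L_xp(k)) # (Id - dt L_x)(Id - dt L_{x'}) k
print(sp.simplify(k_nm1_nm1))
```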

3 Fractional Derivatives of Covariance Kernels

Many properties of a Gaussian Process are determined by the choice of covariance kernel $k_{uu}$. In particular, the covariance kernel encodes an assumption about the smoothness of the field that is being interpolated. Stein [35] writes "…properties of spatial interpolants depend strongly on the local behavior of the random field. In practice, this local behavior is not known and must be estimated from the same data that will be used to do the interpolation. This state of affairs strongly suggests that it is critical to select models for the covariance structures that include at least one member whose local behavior accurately reflects the actual local behavior of the spatially varying quantity under study". Matérn kernels (defined below), a family of stationary kernels which includes the exponential kernel for $\nu = 1/2$ and the squared-exponential kernel in the limit $\nu \to \infty$, have been widely used for this reason. A Gaussian Process with Matérn covariance kernel is $k$-times mean square differentiable for $\nu > k$. We have developed a computational methodology that allows Matérn kernels of arbitrary real order $\nu$ to be used in our Gaussian Process, namely in (3) and (15). In fact, the parameter $\nu$ itself can be treated and optimized as a hyperparameter of the Gaussian Process, just as the equation parameters were in Section 2. We employ such an algorithm to explore the effect of the parameter $\nu$ when working with rough time series histogram data in section 6.

The main problem that arises when using fractional operators is the computation of their action on kernels such as the Matérn class, as required by equations (5) and (6) for the time-independent case and (17) for the time-dependent case. This cannot be done analytically, and requires a numerical approach, in contrast to the works [27] and [24], where (standard) differential operators applied to kernels were obtained symbolically in closed form using Mathematica. Moreover, unlike standard derivatives, fractional derivatives, whether in $\mathbb{R}$, $\mathbb{R}^d$, or on bounded subsets, are nonlocal operators typically defined by singular integrals or eigenfunction expansions that are difficult and expensive to discretize [14], [18]. However, space-fractional derivatives in $\mathbb{R}^d$ enjoy representations as Fourier multiplier operators. In other words, they are equivalent to multiplication by a function in frequency space. This representation suggests a computational method that avoids any singular integral operators or the solution of extension problems in $\mathbb{R}^{d+1}_{+}$. The downside to Fourier methods is that, if used to compute the fractional derivative of a function on $\mathbb{R}^d$, they may require quadrature of a (forward and inverse) Fourier integral. Thus, if one wishes to compute the fractional derivative of a covariance kernel $k(\mathbf{x}, \mathbf{x}')$, as in (5), (6), or (17), this may entail $2d$-dimensional quadrature for $k_{uf}$ and $k_{fu}$, and $4d$-dimensional quadrature for $k_{ff}$. These quadrature dimensions can be cut in half provided the (forward) Fourier transforms of these kernels are known analytically. Moreover, if the covariance kernel is stationary, we see in Theorem 1 below that $k_{ff}$ can be further reduced from a $2d$-dimensional integral to a $d$-dimensional one.

Thus, the entire problem of kernel computation is reduced to $d$-dimensional quadrature if the following three conditions are satisfied:

  1. The spatial differential operator is a Fourier multiplier operator:

    (20) $\widehat{\mathcal{L}^{\phi}_{\mathbf{x}} u}(\boldsymbol{\omega}) = m(\boldsymbol{\omega}; \phi)\, \widehat{u}(\boldsymbol{\omega}).$

    This is true for a variety of fractional space derivatives:

    (21) the fractional Laplacian $(-\Delta)^{\alpha/2}$, with multiplier $|\boldsymbol{\omega}|^{\alpha}$; the left-sided Riemann-Liouville derivative ${}_{-\infty}D_x^{\alpha}$, with multiplier $(i\omega)^{\alpha}$; and the right-sided Riemann-Liouville derivative ${}_{x}D_{+\infty}^{\alpha}$, with multiplier $(-i\omega)^{\alpha}$.

    Here, and throughout this article, we use the Fourier transform convention

    (22) $\widehat{u}(\boldsymbol{\omega}) = \int_{\mathbb{R}^d} u(\mathbf{x})\, e^{-i \boldsymbol{\omega} \cdot \mathbf{x}}\, d\mathbf{x}.$
  2. The covariance kernel is stationary:

    (23) $k(\mathbf{x}, \mathbf{x}'; \theta) = \tilde{k}(\mathbf{x} - \mathbf{x}'; \theta).$

    This is true of the squared-exponential kernel in one dimension,

    (24) $k_{\mathrm{SE}}(x, x'; \sigma^2, \ell) = \sigma^2 \exp\!\left( -\frac{(x - x')^2}{2 \ell^2} \right),$

    as well as frequently used multivariate squared-exponential kernels, formed by multiplication

    (25) $k(\mathbf{x}, \mathbf{x}') = \prod_{i=1}^{d} k_{\mathrm{SE}}(x_i, x_i')$

    or addition

    (26) $k(\mathbf{x}, \mathbf{x}') = \sum_{i=1}^{d} k_{\mathrm{SE}}(x_i, x_i')$

    of the one-dimensional kernel. The same is true for the Matérn kernels $k_{\nu}$, which have one-dimensional form

    (27) $k_{\nu}(x, x'; \sigma^2, \ell) = \sigma^2\, \frac{2^{1-\nu}}{\Gamma(\nu)} \left( \frac{\sqrt{2\nu}\, |x - x'|}{\ell} \right)^{\nu} K_{\nu}\!\left( \frac{\sqrt{2\nu}\, |x - x'|}{\ell} \right)$

    and the corresponding multidimensional kernels

    (28) $k(\mathbf{x}, \mathbf{x}') = \prod_{i=1}^{d} k_{\nu}(x_i, x_i') \quad \text{or} \quad k(\mathbf{x}, \mathbf{x}') = \sum_{i=1}^{d} k_{\nu}(x_i, x_i').$

    The notation $K_{\nu}$ refers to the modified Bessel function of the second kind, which is potentially confusing given the notation for kernels, but this will not be an issue as we will focus on the Fourier representation of $k_{\nu}$ in what follows.

  3. The (forward) Fourier transform of the stationary covariance kernel is known analytically. This is satisfied by the squared-exponential kernel and the Matérn kernel (in the machine learning literature, authors such as [31] describe this Fourier transform as the spectral density in the context of Bochner's theorem on stationary kernels, and write it in the equivalent form, up to Fourier transform convention, $S(s) = \sigma^2 \frac{2^{d} \pi^{d/2} \Gamma(\nu + d/2)\, (2\nu)^{\nu}}{\Gamma(\nu)\, \ell^{2\nu}} \left( \frac{2\nu}{\ell^2} + 4\pi^2 s^2 \right)^{-(\nu + d/2)}$):

    (29) $\widehat{\tilde{k}}_{\mathrm{SE}}(\omega) = \sigma^2 \sqrt{2\pi}\, \ell\, e^{-\ell^2 \omega^2 / 2}, \qquad \widehat{\tilde{k}}_{\nu}(\omega) = \sigma^2\, \frac{2 \sqrt{\pi}\, \Gamma(\nu + \tfrac{1}{2})\, (2\nu)^{\nu}}{\Gamma(\nu)\, \ell^{2\nu}} \left( \frac{2\nu}{\ell^2} + \omega^2 \right)^{-(\nu + 1/2)}.$

    The same is true for the multidimensional kernels formed by addition, by linearity of the Fourier transform, and for those formed by multiplication, by Fubini's theorem. (A numerical consistency check of the Matérn pair (27)/(29) is sketched immediately after this list.)
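The following short sketch (our own check in Python, with illustrative parameter values) verifies the transform pair (27)/(29) by evaluating the one-dimensional Matérn kernel from its closed form and recovering the same value from its Fourier transform by numerical quadrature:

```python
import numpy as np
from scipy.special import kv, gamma
from scipy.integrate import quad

nu, ell, sigma2, r = 1.5, 0.7, 1.3, 0.4     # illustrative values

def matern(r):
    # closed form (27) of the 1D Matern kernel
    z = np.sqrt(2 * nu) * abs(r) / ell
    return sigma2 * 2**(1 - nu) / gamma(nu) * z**nu * kv(nu, z)

def matern_hat(w):
    # forward Fourier transform (29) of the 1D Matern kernel
    c = 2 * np.sqrt(np.pi) * gamma(nu + 0.5) * (2 * nu)**nu / (gamma(nu) * ell**(2 * nu))
    return sigma2 * c * (2 * nu / ell**2 + w**2)**(-(nu + 0.5))

# inverse transform: k(r) = (1/pi) * int_0^inf  k_hat(w) cos(w r) dw  (even integrand)
inv_ft, _ = quad(lambda w: matern_hat(w) * np.cos(w * r), 0, np.inf)
print(matern(r), inv_ft / np.pi)            # the two values should agree
```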

Theorem 1

Suppose conditions (1)-(3) on the covariance kernel $k$ and the operators $\mathcal{L}^{\phi}_{\mathbf{x}}$ and $\mathcal{L}^{\phi}_{\mathbf{x}'}$ are satisfied. Then the fractional derivatives of the covariance kernel, $k_{fu} = \mathcal{L}^{\phi}_{\mathbf{x}} k$, $k_{uf} = \mathcal{L}^{\phi}_{\mathbf{x}'} k$, and $k_{ff} = \mathcal{L}^{\phi}_{\mathbf{x}} \mathcal{L}^{\phi}_{\mathbf{x}'} k$, can be computed by $d$-dimensional integrals

(30) $k_{fu}(\mathbf{x}, \mathbf{x}'; \theta, \phi) = \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} m(\boldsymbol{\omega}; \phi)\, \widehat{\tilde{k}}(\boldsymbol{\omega}; \theta)\, e^{i \boldsymbol{\omega} \cdot (\mathbf{x} - \mathbf{x}')}\, d\boldsymbol{\omega}, \qquad k_{uf}(\mathbf{x}, \mathbf{x}'; \theta, \phi) = \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} m(-\boldsymbol{\omega}; \phi)\, \widehat{\tilde{k}}(\boldsymbol{\omega}; \theta)\, e^{i \boldsymbol{\omega} \cdot (\mathbf{x} - \mathbf{x}')}\, d\boldsymbol{\omega}, \qquad k_{ff}(\mathbf{x}, \mathbf{x}'; \theta, \phi) = \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} m(\boldsymbol{\omega}; \phi)\, m(-\boldsymbol{\omega}; \phi)\, \widehat{\tilde{k}}(\boldsymbol{\omega}; \theta)\, e^{i \boldsymbol{\omega} \cdot (\mathbf{x} - \mathbf{x}')}\, d\boldsymbol{\omega}.$

For a proof of this theorem, see Appendix A.

Provided the integrals (30) can be computed numerically at the locations of the data, they can be used to build the kernel matrix $K$ and evaluate the objective function NLML given by (9). In the Quasi-Newton L-BFGS method (discussed in Section 2) that is used to train the extended hyperparameters implicit in $K$, we supply the gradient of the NLML, the components of which are given by

(31) $\frac{\partial\, \mathrm{NLML}}{\partial \theta_j} = \frac{1}{2} \operatorname{tr}\!\left( K^{-1} \frac{\partial K}{\partial \theta_j} \right) - \frac{1}{2}\, \mathbf{Y}^{\top} K^{-1} \frac{\partial K}{\partial \theta_j} K^{-1} \mathbf{Y}.$

See [31], §5.4. To obtain $\partial K / \partial \theta_j$, we note in (30) that the derivative may be passed into the integrand and through the complex exponential factor. The resulting derivative of the product of the multiplier(s) $m(\boldsymbol{\omega}; \phi)$, which contains the equation parameters, and $\widehat{\tilde{k}}(\boldsymbol{\omega}; \theta)$, which contains the original kernel parameters, can be obtained symbolically as a closed-form expression. The same numerical procedure is used to evaluate the resulting integrals as for (30). The Hessian is not supplied in closed form and is approximated from evaluations of the gradient.
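For reference, a gradient component of the form (31) can be assembled as follows; this is a generic Python helper of our own, checked against a finite-difference approximation. In the actual method, $K$ and $\partial K / \partial \theta_j$ would come from the quadrature formulas (30) and their symbolic parameter derivatives.

```python
import numpy as np

def nlml_grad_component(K, dK, y):
    # equation (31): 0.5 tr(K^{-1} dK) - 0.5 y^T K^{-1} dK K^{-1} y
    Kinv_y = np.linalg.solve(K, y)
    return 0.5 * np.trace(np.linalg.solve(K, dK)) - 0.5 * Kinv_y @ dK @ Kinv_y

# finite-difference check on K(t) = A + t*B with A symmetric positive definite
rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5)); A = A @ A.T + 5 * np.eye(5)
B = rng.normal(size=(5, 5)); B = B + B.T
y = rng.normal(size=5)
f = lambda t: 0.5 * y @ np.linalg.solve(A + t * B, y) + 0.5 * np.linalg.slogdet(A + t * B)[1]
eps = 1e-6
print(nlml_grad_component(A, B, y), (f(eps) - f(-eps)) / (2 * eps))
```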

When training a Gaussian process, it is advantageous to standardize the data [40] so that, when possible, the values and locations of the training data are of order one. In the framework discussed here, this reduces the difficulty of computing the kernel functions in (30) by restricting the frequency content and support of the integrands. When necessary, standardization for the applications considered here can be performed by rescaling the values and positions of the data points by appropriate constants. Once the differential equation is learned for the scaled solution, the true coefficients can be obtained via inverse scaling. This is discussed in detail for an example in Section 7. During training, we expect convergence to a local minimum of the NLML, and we have not found the optimal hyperparameters to depend significantly on the initial guess in our examples, but there is no guarantee of this. Uniqueness, stability, and convergence remain important open questions.

Although numerical calculation of the above integrals can be performed using Gauss-Hermite quadrature, this is not optimal, as the fractional-order monomial $|\boldsymbol{\omega}|^{\alpha}$ is not smooth at the origin. To obtain faster convergence with the number of quadrature points, a superior choice is generalized Gauss-Laguerre quadrature, involving a weight function of the form $x^{\gamma} e^{-x}$ for $\gamma > -1$:

(32) $\int_0^{\infty} x^{\gamma} e^{-x} g(x)\, dx \approx \sum_{i=1}^{n} w_i\, g(x_i).$

Here, $w_i$ are the generalized Gauss-Laguerre weights, and $x_i$ the nodes. In practice, it is essential for $\gamma$ to match the fractional part of the power of the monomial in the integrand, as the remainder then yields a smooth function. We use the Golub-Welsch algorithm to find the nodes, but compute the weights by evaluating the generalized Gauss-Laguerre polynomial at these nodes for higher relative accuracy.
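The mechanics of (32) are illustrated by the following sketch, which obtains generalized Gauss-Laguerre nodes and weights (here via SciPy rather than the Golub-Welsch construction described above) and checks them against the closed-form value of $\int_0^{\infty} x^{\gamma} e^{-x} \cos(x)\, dx$; the exponent $\gamma = 0.6$ is purely illustrative.

```python
import numpy as np
from scipy.special import roots_genlaguerre, gamma

g = 0.6                                   # fractional exponent in the weight x^g e^{-x}
# closed form: int_0^inf x^g e^{-x} cos(x) dx = Gamma(g+1) cos((g+1) pi/4) / 2^((g+1)/2)
exact = gamma(g + 1) * np.cos((g + 1) * np.pi / 4) / 2**((g + 1) / 2)
for n in (8, 16, 32, 64):
    x, w = roots_genlaguerre(n, g)        # nodes/weights for int_0^inf x^g e^{-x} f(x) dx
    print(n, abs(w @ np.cos(x) - exact))
```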

We describe two examples and discuss the convergence of the numerical Gauss-Laguerre quadrature in each one. First, consider the left-sided Riemann-Liouville derivative in one dimension, which involves the multiplier $(i\omega)^{\alpha}$ in the above formulas. Owing to the symmetry of $\widehat{\tilde{k}}$, one can write

(33)-(36) $k_{fu}(x, x'; \theta, \alpha) = \frac{1}{2\pi} \int_{-\infty}^{\infty} (i\omega)^{\alpha}\, \widehat{\tilde{k}}(\omega; \theta)\, e^{i\omega(x - x')}\, d\omega = \frac{1}{\pi} \int_0^{\infty} \omega^{\alpha}\, \widehat{\tilde{k}}(\omega; \theta)\, \cos\!\left( \omega (x - x') + \frac{\pi \alpha}{2} \right) d\omega,$

and similarly for $k_{uf}$, with $-\pi\alpha/2$ in place of $+\pi\alpha/2$ in the cosine. Similarly,

(37) $k_{ff}(x, x'; \theta, \alpha) = \frac{1}{\pi} \int_0^{\infty} \omega^{2\alpha}\, \widehat{\tilde{k}}(\omega; \theta)\, \cos\!\big( \omega (x - x') \big)\, d\omega.$

These integrals call for generalized Gauss-Laguerre quadrature to be performed with $\gamma = \alpha - \lfloor \alpha \rfloor$ for $k_{fu}$ and $k_{uf}$, and $\gamma = 2\alpha - \lfloor 2\alpha \rfloor$ for $k_{ff}$. Using the Matérn kernel as an example, the convergence of the error with the number of quadrature points is shown in Table 1. The MATLAB integral function is used for reference when computing the error. The setup for working with the right-sided Riemann-Liouville derivative, or for the one-dimensional fractional Laplacian, is entirely similar.
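As an illustration of formulas (33)-(36), the Python sketch below (our own, not the paper's MATLAB code) evaluates $k_{fu}$ for the left-sided Riemann-Liouville derivative by generalized Gauss-Laguerre quadrature and compares the result against adaptive quadrature of the same integral. For simplicity it uses the squared-exponential kernel, whose Fourier transform decays rapidly; the Matérn case follows the same pattern with $\widehat{\tilde{k}}_{\nu}$ in place of $\widehat{\tilde{k}}_{\mathrm{SE}}$. All parameter values are illustrative.

```python
import numpy as np
from scipy.special import roots_genlaguerre
from scipy.integrate import quad

alpha, sigma2, ell, r = 1.4, 1.0, 1.0, 0.3      # illustrative values; r = x - x'

# forward Fourier transform of the 1D squared-exponential kernel
k_hat = lambda w: sigma2 * np.sqrt(2 * np.pi) * ell * np.exp(-0.5 * (ell * w)**2)

# integrand of k_fu in (33)-(36): w^alpha * k_hat(w) * cos(w r + pi alpha / 2) on (0, inf)
integrand = lambda w: w**alpha * k_hat(w) * np.cos(w * r + np.pi * alpha / 2)

gam = alpha - np.floor(alpha)                    # match gamma to the fractional part of alpha
ref = quad(integrand, 0, np.inf)[0] / np.pi      # adaptive-quadrature reference value
for n in (8, 16, 32, 64):
    nodes, weights = roots_genlaguerre(n, gam)
    # factor out the built-in weight w^gam * e^{-w} from the integrand
    smooth_part = integrand(nodes) * np.exp(nodes) / nodes**gam
    k_fu = (weights @ smooth_part) / np.pi
    print(n, abs(k_fu - ref))
```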

Next we consider the fractional Laplacian in two dimensions. This involves the multiplier $|\boldsymbol{\omega}|^{\alpha}$. We transform the integrals into polar coordinates $\boldsymbol{\omega} = (\rho \cos\theta, \rho \sin\theta)$, writing $\mathbf{r} = \mathbf{x} - \mathbf{x}' = (r_1, r_2)$:

(38) $k_{fu}(\mathbf{x}, \mathbf{x}'; \theta, \alpha) = \frac{1}{(2\pi)^2} \int_0^{2\pi} \int_0^{\infty} \rho^{\alpha + 1}\, \widehat{\tilde{k}}(\rho \cos\theta, \rho \sin\theta; \theta)\, \cos\!\big( \rho\, (r_1 \cos\theta + r_2 \sin\theta) \big)\, d\rho\, d\theta,$

(39) $k_{ff}(\mathbf{x}, \mathbf{x}'; \theta, \alpha) = \frac{1}{(2\pi)^2} \int_0^{2\pi} \int_0^{\infty} \rho^{2\alpha + 1}\, \widehat{\tilde{k}}(\rho \cos\theta, \rho \sin\theta; \theta)\, \cos\!\big( \rho\, (r_1 \cos\theta + r_2 \sin\theta) \big)\, d\rho\, d\theta.$

The quadrature of these integrals is performed using the trapezoid rule in $\theta$ and generalized Gauss-Laguerre quadrature in $\rho$, using $\gamma = \alpha - \lfloor \alpha \rfloor$ for $k_{fu}$ and $\gamma = 2\alpha - \lfloor 2\alpha \rfloor$ for $k_{ff}$. Using the product multivariate Matérn kernel as an example, convergence with respect to the number of quadrature points in $\rho$ is shown in Table 2, where 64 quadrature points in $\theta$ and 8, 16, 32, 64 quadrature points in $\rho$ are used. Reference answers were generated by nesting the MATLAB integral function.
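A corresponding sketch for the two-dimensional formulas (38)-(39) is given below (again our own illustration, using a two-dimensional squared-exponential kernel rather than the paper's Matérn setup). Setting $\alpha = 2$ makes the answer available in closed form, $(-\Delta)\, \tilde{k}(\mathbf{r}) = \sigma^2 \big( 2/\ell^2 - |\mathbf{r}|^2 / \ell^4 \big)\, e^{-|\mathbf{r}|^2 / (2\ell^2)}$, which the trapezoid-in-$\theta$ / Gauss-Laguerre-in-$\rho$ quadrature should reproduce.

```python
import numpy as np
from scipy.special import roots_genlaguerre

sigma2, ell, alpha = 1.0, 0.8, 2.0               # alpha = 2: answer known in closed form
r = np.array([0.3, -0.2])                        # r = x - x'

# forward Fourier transform of the 2D (product) squared-exponential kernel
k_hat = lambda rho: sigma2 * 2 * np.pi * ell**2 * np.exp(-0.5 * (ell * rho)**2)

exact = sigma2 * (2 / ell**2 - (r @ r) / ell**4) * np.exp(-(r @ r) / (2 * ell**2))

ntheta = 64
theta = 2 * np.pi * np.arange(ntheta) / ntheta    # trapezoid rule on the periodic integrand
gam = (alpha + 1) % 1.0                           # fractional part of the rho-monomial power
for n in (8, 16, 32, 64):
    rho, w = roots_genlaguerre(n, gam)
    radial = rho**(alpha + 1 - gam) * np.exp(rho) * k_hat(rho)   # undo the GGL weight
    ang = r[0] * np.cos(theta) + r[1] * np.sin(theta)
    vals = np.cos(np.outer(rho, ang))             # cos(rho * r . (cos t, sin t))
    k_fu = (w * radial) @ vals.sum(axis=1) * (2 * np.pi / ntheta) / (2 * np.pi)**2
    print(n, abs(k_fu - exact))
```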

The examples in the following sections are built by defining the Matérn kernels in Mathematica and performing derivatives with respect to the hyperparameters symbolically. The expressions are ported into MATLAB code, where generalized Gauss-Laguerre quadrature with appropriate $\gamma$ is used to compute the NLML (9) as well as the derivatives of the NLML with respect to the parameters. For our purposes, 64 quadrature points in one dimension and 64x64 quadrature points (as described above) in two dimensions are sufficient.

Error vs. number of quadrature points
                  8           16          32          64
k              1.190e-02   3.468e-04   2.917e-06   2.876e-07
     1         5.126e-03   8.029e-06   3.741e-07   3.204e-07
     10        8.163e-02   1.446e-02   2.378e-04   4.188e-06
k_fu           4.752e-02   1.991e-03   2.822e-05   3.315e-06
     1         6.179e-03   9.528e-05   2.524e-07   1.634e-07
     10        6.937e-02   9.915e-03   3.456e-04   2.828e-06
k_ff           1.624e-01   8.980e-03   2.277e-04   3.449e-06
     1         1.006e-03   1.515e-04   9.092e-07   9.642e-07
     10        6.571e-02   4.774e-03   3.370e-04   1.217e-06
Table 1: Error in computing the Matérn kernel k and the kernel blocks k_fu and k_ff using generalized Gauss-Laguerre quadrature with 8, 16, 32, and 64 quadrature points.
Error vs. number of quadrature points in ρ
                  8           16          32          64
k              5.406e-02   6.201e-04   8.248e-06   9.939e-05
     1         1.368e-02   3.611e-04   2.659e-06   2.955e-06
     10        7.955e-01   8.385e-02   3.873e-03   2.006e-05
k_fu           2.025e-01   3.922e-03   6.705e-05   3.842e-05
     1         2.769e-02   9.611e-04   8.921e-06   8.977e-06
     10        3.717e-01   2.013e-02   3.340e-03   1.515e-05
k_ff           4.759e-01   2.644e-02   5.154e-04   7.347e-05
     1         8.117e-02   1.449e-03   8.740e-06   9.848e-06
     10        8.713e-02   4.553e-02   1.535e-03   8.209e-06
Table 2: Error in computing the Matérn kernel k and the kernel blocks k_fu and k_ff on a square domain. The same correlation parameter is used for both kernels. Quadrature is performed using generalized Gauss-Laguerre quadrature in the polar variable ρ with 8, 16, 32, and 64 quadrature points, with a fixed number of 64 trapezoid rule quadrature points in θ.

4 A Basic Example

In this section, we will illustrate the methodology by discovering the parameters $\alpha$ and $c$ in the fractional elliptic equation

(40) $c\, (-\Delta)^{\alpha/2} u(\mathbf{x}) = f(\mathbf{x})$

in one and two space dimensions from data on $u$ and $f$.

Following the time-independent framework, this means that we must optimize the negative log marginal likelihood (9), where in the covariance matrix (10) the kernels are given by the integral formulas (30). In the latter formulas, the multiplier and the Fourier transform of the stationary prior kernel must be specified; the multiplier corresponding to the operator is $c\, |\omega|^{\alpha}$ in one dimension, and $c\, |\boldsymbol{\omega}|^{\alpha}$ in two dimensions. We choose to use the Matérn kernel $k_{\nu}$, given by (27) in one dimension, with a tensor product (28) of Matérn kernels in two dimensions. Thus, by equations (29), the Fourier transform of the prior kernel is known in closed form. This completes the description of the covariance kernel.

In the one-dimensional case, the solution/right-hand-side pair

(41)

is used. Two data sets are generated for two experiments. In both experiments, we use the above solution/RHS pair to generate data with fixed true values of $\alpha$ and $c$. The Matérn kernel with fixed $\nu$ is used to discover the parameters, and initial values of the parameters are chosen for the optimization. In the first experiment, 7 data points for $u$ and 11 data points for $f$ are generated via Latin hypercube sampling. No noise is added. In the second experiment, 20 data points for each of $u$ and $f$ are generated in the same way, but normal random noise of standard deviation 0.1 for $u$ and 0.2 for $f$ is added to the data.

The GP regression for the first experiment is shown in Figure 1. The recovered equation parameters are within 1% of the true values. The GP regression for the second experiment is shown in Figure 2. There, we have also plotted twice the standard deviation of the Gaussian process plus twice the standard deviation of the learned noise (in the previous experiment, these learned noise parameters were minuscule). The parameters learned in the second experiment are within 7% and 6% of the true values, respectively.

In the two-dimensional example, the exact solution pair used to generate data is now

(42)

Again, a fixed Matérn parameter $\nu$ is used, and initial values of the parameters are chosen for the optimization. We take 40 data points for each of $u$ and $f$, generated using Latin hypercube sampling, with fixed true values of $\alpha$ and $c$ used to generate the data. The result of the training is shown in Figure 3. The learned parameters $\alpha$ and $c$ are close to the true values.

These examples demonstrate the feasibility of implementing the fractional kernels as described in the previous sections and the accuracy of the discovered parameters, even with noise or small data. Moreover, there is no theoretical difficulty in increasing the dimension of the problem. However, in addition to longer runtimes for the computation of formulas (30), the user should expect significantly more data to be required for accurate parameter estimation. For example, performing the same two-dimensional example above with only 20 data points for each of $u$ and $f$ results in a learned parameter exhibiting an error of roughly 20%. In the analogous one-dimensional example (Fig. 1), errors within 1% are obtained for less than half this number of data points.

In concluding this section, we point out that, since the estimation of the equation parameters is based on accurate Gaussian process regression through the training data, there are various well-known hazards to avoid. In addition to obvious issues such as unresolved data and data that is too noisy, too much data in featureless "flat" regions carries the risk of overfitting, and should be avoided.

Figure 1: Result of training the GP in the one-dimensional example on 7 noise-free data points for $u$ and 11 data points for $f$. The data can be located at arbitrary positions. The trained parameters are within 1% of the true values. Optimization wall time: 7 minutes. Roughly 1400 function evaluations. All reported wall times in this article are obtained using four threads (default MATLAB vectorization) on an Intel i7-6700k at stock frequency settings.

Figure 2: Result of training the GP on 20 noisy data points for each of $u$ and $f$. The trained parameters are within 7% of the true values. For this example, we have also plotted in green two standard deviations of the GP, plus two times the learned noise parameters. Optimization wall time: 38 seconds. Roughly 100 function evaluations.

Figure 3: Result of training in the two-dimensional example. Top: distribution of the 40 data points for each of $u$, left, and $f$, right. Middle: error between the mean of the trained GP for $u$, left, and $f$, right, and the exact functions used to generate data. Bottom: standard deviation of the GP for $u$, left, and $f$, right. Note that the positions of data points for $u$ are illustrated by squares, while data points for $f$ are illustrated by circles. Optimization wall time: 74 minutes. Roughly 1400 function evaluations.

5 Discovering and Interpolating Integer Order Models

As discussed in the introduction, fractional-order differential operators interpolate between classical differential operators of integer order, reducing the task of choosing a "dictionary" of operators of various orders and the assumptions that this entails. The user controls the parsimony directly, by choosing the number of fractional terms in the candidate equation. For example, the model in Section 4 was constrained to be of parsimony one, since it includes a single differential operator of fractional order. This raises several questions, such as: Can the method be used to discover integer-order operators when they drive the true dynamics? What can be expected if the user-selected parsimony is lower than the parsimony of the "true" model driving the data? Can lower-parsimony models still be used to model dynamics? To explore these questions, we consider the parametrized model

(43) $\frac{\partial u}{\partial t} = c\; {}_{-\infty}D_x^{\alpha} u$

for fractional order $\alpha$ and coefficient $c$, where the left-sided Riemann-Liouville derivative ${}_{-\infty}D_x^{\alpha}$ was defined in (21). Note that

(44) ${}_{-\infty}D_x^{1} u = \frac{\partial u}{\partial x}, \qquad {}_{-\infty}D_x^{2} u = \frac{\partial^2 u}{\partial x^2}.$

For $\alpha = 1$, equation (43) reduces to the advection equation (with speed $-c$)

(45) $\frac{\partial u}{\partial t} = c\, \frac{\partial u}{\partial x},$

while for $\alpha = 2$, it reduces to the diffusion equation (with diffusion coefficient $c$)

(46) $\frac{\partial u}{\partial t} = c\, \frac{\partial^2 u}{\partial x^2}.$

We perform four experiments. Importantly, we learn these equations using the time-stepping methodology, optimizing (9), where the kernel blocks are given by (17) and (18). We take a large Matérn parameter $\nu$ (effectively a squared-exponential kernel) and again use (30) with generalized Gauss-Laguerre quadrature to evaluate the action of the fractional derivatives on $k_{uu}$. In all of the experiments, we generate data from the same initial condition at two time slices $t_{n-1}$ and $t_n$. The GP is trained on 30 data points for each of these time slices. The experiments are

  1. Data generated from the exact solution to the advection equation (45). We learn the order $\alpha$ and coefficient $c$ in (43); the exact order is $\alpha = 1$.

  2. Data generated from the exact solution to the heat equation (46). We learn the order $\alpha$ and coefficient $c$ in (43); the exact order is $\alpha = 2$.

  3. Data generated from the exact solution to the integer-order advection-diffusion equation

    (47) $\frac{\partial u}{\partial t} = c_1\, \frac{\partial u}{\partial x} + c_2\, \frac{\partial^2 u}{\partial x^2}.$

    We learn the order $\alpha$ and coefficient $c$ in (43). Note that this archetype is limited to only one space-differential operator; we will see that the algorithm will select the best order $\alpha$ to capture both the advection and diffusion in the data.

  4. The same advection-diffusion data as in experiment 3, but with the two-term, four-parameter candidate equation

    (48) $\frac{\partial u}{\partial t} = c_1\; {}_{-\infty}D_x^{\alpha_1} u + c_2\; {}_{-\infty}D_x^{\alpha_2} u.$

We note that all of these experiments call for only a single fractional-order dictionary term; even Experiment 4 uses two copies of the same archetype. Experiments 1-3 use one set of initial values for $\alpha$ and $c$, and Experiment 4 uses initial values for $\alpha_1$, $c_1$, $\alpha_2$, and $c_2$.

In Experiments 1 and 2, we note that fractional-order parameters are discovered close to (within 5% of) the true integer-order parameters. In this sense, the true dynamics can be considered recovered. The numerical difference from the true parameters, despite the high number of data points (30 per slice), is likely due to approximation error from the backward Euler approximation (14), and may be resolved by using higher-order differentiation with more time slices [25], as simply taking $\Delta t$ to be extremely small may cause the optimization to be dominated by numerical error in computing the kernels.

Experiment 3 shows what occurs when the user-defined parsimony is less than that of the true dynamics used to generate the data. The optimizer still converges, and to a sensible answer – the result can be interpreted as an interpolation of the two integer-order operators in the true dynamics. Moreover, as shown in Figure 4, near the time of the data used to train the model, and even much later, the fractional dynamics are a good approximation to the true dynamics, while being simpler in the sense of being driven by only one spatial derivative. This suggests potential applications of interpolating complex systems using lower-parsimony fractional models.

In Experiment 4, the parsimony was increased with the inclusion of an additional independent copy of the fractional archetype. The advection-diffusion equation is recovered with parameters close to the true values. Thus, with a single fractional archetype and user-controlled parsimony, it is possible to discover advection, diffusion, advection-diffusion, as well as a single-term fractional interpolation of advection-diffusion.

Exp. Data Candidate Parameters Learned
1 Advection: