Hidden Physics Models: Machine Learning of Nonlinear Partial Differential Equations

08/02/2017 ∙ by Maziar Raissi, et al.

While there is currently a lot of enthusiasm about "big data", useful data is usually "small" and expensive to acquire. In this paper, we present a new paradigm of learning partial differential equations from small data. In particular, we introduce hidden physics models, which are essentially data-efficient learning machines capable of leveraging the underlying laws of physics, expressed by time dependent and nonlinear partial differential equations, to extract patterns from high-dimensional data generated from experiments. The proposed methodology may be applied to the problem of learning, system identification, or data-driven discovery of partial differential equations. Our framework relies on Gaussian processes, a powerful tool for probabilistic inference over functions, that enables us to strike a balance between model complexity and data fitting. The effectiveness of the proposed approach is demonstrated through a variety of canonical problems, spanning a number of scientific domains, including the Navier-Stokes, Schrödinger, Kuramoto-Sivashinsky, and time dependent linear fractional equations. The methodology provides a promising new direction for harnessing the long-standing developments of classical methods in applied mathematics and mathematical physics to design learning machines with the ability to operate in complex domains without requiring large quantities of data.


1 Introduction

There are more than a trillion sensors in the world today, and according to some estimates there will be about 50 trillion cameras worldwide within the next five years, all collecting data either sporadically or around the clock. However, in scientific experiments, high-quality, error-free data are not easy to obtain, e.g., for system dynamics characterized by bifurcations and instabilities, hysteresis, and often irreversible responses. Admittedly, in scientific experiments, as in everyday applications, the volume of data has increased substantially compared to even a decade ago, but analyzing big data is expensive and time-consuming. Data-driven methods, which have been enabled in the past decade by the availability of sensors, data storage, and computational resources, are taking center stage across many disciplines of science. We now have highly scalable solutions for problems in object detection and recognition, machine translation, text-to-speech conversion, recommender systems, and information retrieval. All of these solutions attain state-of-the-art performance when trained with large amounts of data. However, purely data-driven approaches to machine learning present difficulties when the data is scarce relative to the complexity of the system. Hence, the ability to learn in a sample-efficient manner is a necessity in these data-limited domains. Less well understood is how to leverage the underlying physical laws and/or governing equations to extract patterns from small data generated from highly complex systems. In this work, we propose a modeling framework that enables blending conservation laws, physical principles, and/or phenomenological behaviors expressed by partial differential equations with the datasets available in many fields of engineering, science, and technology. This paper should be considered a direct continuation of a preceding one

raissi2017numerical in which we addressed the problem of inferring solutions of time dependent and nonlinear partial differential equations using noisy observations. Here, a similar methodology is employed to deal with the problem of learning, system identification, or data-driven discovery of partial differential equations Rudye1602614.

2 Problem Setup

Let us consider parametrized and nonlinear partial differential equations of the general form

u_t + 𝒩_x^λ u = 0,  x ∈ Ω,  t ∈ [0, T],    (1)

where u(t, x) denotes the latent (hidden) solution, 𝒩_x^λ is a nonlinear operator parametrized by λ, and Ω is a subset of ℝ^D. As an example, the one dimensional Burgers' equation corresponds to the case where 𝒩_x^λ u = λ₁ u u_x − λ₂ u_xx and λ = (λ₁, λ₂). Here, the subscripts denote partial differentiation in either time or space. Given noisy measurements of the system, one is typically interested in the solution of two distinct problems. The first problem is that of inference or filtering and smoothing, which states: given fixed model parameters λ, what can be said about the unknown hidden state u(t, x) of the system? This question is the topic of a preceding paper raissi2017numerical of the authors, in which we introduce the concept of numerical Gaussian processes and address the problem of inferring solutions of time dependent and nonlinear partial differential equations using noisy observations. The second problem is that of learning, system identification, or data driven discovery of partial differential equations Rudye1602614, stating: what are the parameters λ that best describe the observed data? Here we assume that all we observe are two noisy snapshots y^{n−1} and y^n of the system at times t^{n−1} and t^n, respectively, which are Δt apart. The main assumption is that Δt is small enough so that we can apply the backward Euler time stepping scheme (for a general treatment of arbitrary linear multi-step methods as well as Runge-Kutta time stepping schemes we refer the reader to raissi2017numerical) to equation (1) and obtain the discretized equation

u^n + Δt 𝒩_x^λ u^n = u^{n−1}.    (2)

Here, u^n = u(t^n, x) is the hidden state of the system at time t^n. Approximating the nonlinear operator on the left-hand side of equation (2) by a linear one we obtain

ℒ_x^λ u^n = u^{n−1}.    (3)

For instance, the nonlinear operator

u^n + Δt λ₁ u^n u^n_x − Δt λ₂ u^n_xx

involved in the Burgers' equation can be approximated by the linear operator

ℒ_x^λ u^n := u^n + Δt λ₁ μ^{n−1} u^n_x − Δt λ₂ u^n_xx,

where μ^{n−1} is the state of the system at the previous time step t^{n−1}.
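The linearization step can be illustrated numerically. The following sketch (illustrative only, not the paper's code; the periodic grid, finite-difference stencils, and test fields are assumptions made for the example) compares the nonlinear backward-Euler operator for Burgers' equation with its linearized counterpart, in which the advecting field is frozen at the previous state:

```python
import numpy as np

# Backward-Euler residual operators for Burgers' equation on a periodic grid.
def ddx(u, dx):     # centered first derivative
    return (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)

def d2dx2(u, dx):   # centered second derivative
    return (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2

def nonlinear_op(u, dt, l1, l2, dx):
    # u^n + dt*(l1*u^n*u^n_x - l2*u^n_xx), the nonlinear left-hand side of (2)
    return u + dt * (l1 * u * ddx(u, dx) - l2 * d2dx2(u, dx))

def linear_op(u, mu_prev, dt, l1, l2, dx):
    # same operator with the advecting field frozen at the previous state mu^{n-1}
    return u + dt * (l1 * mu_prev * ddx(u, dx) - l2 * d2dx2(u, dx))

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
dx = x[1] - x[0]
u_prev = np.sin(x)              # stand-in for the previous snapshot
u_curr = np.sin(x - 0.01)       # stand-in for the next snapshot
dt, l1, l2 = 0.01, 1.0, 0.1
gap = np.max(np.abs(nonlinear_op(u_curr, dt, l1, l2, dx)
                    - linear_op(u_curr, u_prev, dt, l1, l2, dx)))
```

The gap between the two operators is of order Δt·|u^n − μ^{n−1}|, which is what justifies the approximation when the two snapshots are close in time.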

3 The Basic Model

Similar to Raissi et al. Raissi2017736; raissi2017machine, we build upon the analytical property of Gaussian processes that the output of a linear system whose input is Gaussian distributed is again Gaussian. Gaussian processes (see Rasmussen06gaussianprocesses; murphy2012machine) provide a flexible prior distribution over functions and enjoy analytical tractability. They can be viewed as a prior on one-layer feed-forward Bayesian neural networks with an infinite number of hidden units neal2012bayesian, belong to the class of methods known as kernel machines (see vapnik2013nature; scholkopf2002learning; tipping2001sparse), and are analogous to regularization approaches (see tikhonov1963solution; Tikhonov/Arsenin/77; poggio1990networks). Specifically, we proceed by placing a Gaussian process prior over the latent function u^n(x); i.e.,

u^n(x) ∼ 𝒢𝒫(0, k(x, x'; θ)).    (4)

Here, θ denotes the hyper-parameters of the covariance function k. From a theoretical point of view, each kernel (i.e., covariance function) gives rise to a Reproducing Kernel Hilbert Space (RKHS) aronszajn1950theory; saitoh1988theory; berlinet2011reproducing that defines a class of functions that can be represented by this kernel. In particular, the squared exponential covariance function implies smooth approximations, and more complex function classes can be accommodated by employing nonlinear warping of the input space to capture discontinuities calandra2016manifold; raissi2016deep. For a more systematic treatment of the kernel-selection problem we refer the reader to duvenaud2013structure; grosse2012exploiting; malkomes2016bayesian. Without loss of generality, all Gaussian process priors used in this work are assumed to have a squared exponential covariance function, i.e.,

k(x, x'; θ) = γ² exp( −(1/2) Σ_{d=1}^{D} w_d (x_d − x'_d)² ),

where θ = (γ, w₁, …, w_D) are the hyper-parameters and x is a D-dimensional vector. The Gaussian process prior assumption (4) along with equation (3) enable us to capture the entire structure of the linear operator ℒ_x^λ in the resulting multi-output Gaussian process

[u^n, u^{n−1}]ᵀ ∼ 𝒢𝒫( 0, [ k^{n,n}  k^{n,n−1} ; k^{n−1,n}  k^{n−1,n−1} ] ).    (5)

It is worth highlighting that the parameters λ of the operators 𝒩_x^λ and ℒ_x^λ turn into hyper-parameters of the resulting covariance functions. The specific forms of the kernels (for all examples studied in this work, the kernels are generated at the push of a button using Wolfram Mathematica, a mathematical symbolic computation program) are direct functions of equation (3) as well as the prior assumption (4); i.e.,

k^{n,n}(x, x') = k(x, x'; θ),
k^{n,n−1}(x, x') = ℒ_{x'}^λ k(x, x'; θ),
k^{n−1,n}(x, x') = ℒ_x^λ k(x, x'; θ),
k^{n−1,n−1}(x, x') = ℒ_x^λ ℒ_{x'}^λ k(x, x'; θ).

We call the multi-output Gaussian process (5) a hidden physics model, because its matrix of covariance functions explicitly encodes the underlying laws of physics expressed by equations (1) and (3).
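The mechanical nature of this kernel derivation can be reproduced with any symbolic computation system. The sketch below (using sympy rather than Mathematica, and a simple backward-Euler heat-equation operator ℒu = u − Δt·λ·u_xx rather than one of the paper's examples; all names are illustrative) obtains the cross-covariances by applying the linear operator to a squared exponential kernel:

```python
import sympy as sp

# Squared-exponential prior kernel k(x, x') and a linear operator
# L u = u - dt*lam*u_xx (backward Euler for the heat equation).
x, xp = sp.symbols('x xp', real=True)
dt, lam, gamma, w = sp.symbols('dt lam gamma w', positive=True)

k = gamma**2 * sp.exp(-w * (x - xp)**2 / 2)

L_xp = lambda f: f - dt * lam * sp.diff(f, xp, 2)  # operator acting in x'
L_x  = lambda f: f - dt * lam * sp.diff(f, x, 2)   # operator acting in x

k_cross = sp.simplify(L_xp(k))       # k^{n,n-1}(x, x') = L_{x'} k
k_prev  = sp.simplify(L_x(k_cross))  # k^{n-1,n-1}(x, x') = L_x L_{x'} k
```

Setting λ = 0 (operator reduces to the identity) recovers the original kernel in both derived covariances, which is a quick sanity check on the symbolic manipulation.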

4 Learning

Given the noisy data {x^{n−1}, y^{n−1}} and {x^n, y^n} on the latent solution at times t^{n−1} and t^n, respectively, the hyper-parameters θ of the covariance functions and, more importantly, the parameters λ of the operators 𝒩_x^λ and ℒ_x^λ can be learned by employing the Quasi-Newton optimizer L-BFGS liu1989limited to minimize the negative log marginal likelihood Rasmussen06gaussianprocesses

NLML(θ, λ, σ²) = (1/2) yᵀ K⁻¹ y + (1/2) log |K| + (N/2) log 2π,    (6)

where y = [y^n; y^{n−1}] collects the observations and the covariance matrix K is given by

K = [ k^{n,n}(x^n, x^n)         k^{n,n−1}(x^n, x^{n−1})
      k^{n−1,n}(x^{n−1}, x^n)   k^{n−1,n−1}(x^{n−1}, x^{n−1}) ] + σ² I.

Here, N is the total number of data points in y. Moreover, the noise variance σ² is included to capture the noise in the data and is also learned by minimizing the negative log marginal likelihood. The implicit underlying assumption is that y^n = u^n(x^n) + ε^n and y^{n−1} = u^{n−1}(x^{n−1}) + ε^{n−1}, with ε^n and ε^{n−1} being independent, zero-mean Gaussian noise with variance σ². The negative log marginal likelihood (6) does not simply favor the models that fit the training data best. In fact, it induces an automatic trade-off between data fit and model complexity. Specifically, minimizing the term (1/2) yᵀ K⁻¹ y in equation (6) targets fitting the training data, while the log-determinant term (1/2) log |K| penalizes model complexity. This regularization mechanism automatically meets the Occam's razor principle rasmussen2001occam, which encourages simplicity in explanations, effectively guards against overfitting, and enables learning the unknown model parameters from very few noisy observations. (Regularization is important even in data-abundant regimes, as witnessed by the recently growing literature on discovering ordinary and partial differential equations from data using sparse regression techniques brunton2016discovering; Rudye1602614.) However, there is no theoretical guarantee that the negative log marginal likelihood does not suffer from multiple local minima. Our practical experience so far seems to indicate that local minima are not a devastating problem, but they certainly do exist. Moreover, it should be highlighted that, although not pursued here, a fully Bayesian stuart2010inverse and more robust estimate of the linear operator parameters can be obtained by assigning priors on (θ, λ, σ²). However, this would require more costly sampling procedures such as Markov Chain Monte Carlo (see Rasmussen06gaussianprocesses, chapter 5) to train the model. Furthermore, the most computationally intensive part of learning using the negative log marginal likelihood (6) is the inversion of the dense covariance matrix K, which scales cubically with the total number of training data points in y. While this cost has been effectively addressed by the recent works of snelson2006sparse; hensman2013gaussian; raissi2017parametric, cubic scaling remains a well-known limitation of Gaussian process regression.
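The training procedure of this section can be sketched for a generic single-output Gaussian process regression problem (an illustrative stand-in for the multi-output model above; the data, log-parametrization, and kernel names are invented for the example):

```python
import numpy as np
from scipy.optimize import minimize

# Toy data: noisy observations of a smooth latent function.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(30, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(30)

def kernel(A, B, log_gamma, log_w):
    # Squared-exponential kernel in a log-parametrization (keeps params positive).
    d2 = (A[:, None, 0] - B[None, :, 0])**2
    return np.exp(2 * log_gamma) * np.exp(-0.5 * np.exp(log_w) * d2)

def nlml(params):
    log_gamma, log_w, log_s2 = params
    K = kernel(X, X, log_gamma, log_w) + np.exp(log_s2) * np.eye(len(y))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    # 0.5*y^T K^{-1} y (data fit) + 0.5*log|K| (complexity penalty) + constant
    return 0.5 * y @ alpha + np.sum(np.log(np.diag(L))) \
        + 0.5 * len(y) * np.log(2 * np.pi)

res = minimize(nlml, x0=np.zeros(3), method='L-BFGS-B')
```

The two terms in `nlml` are exactly the data-fit and log-determinant terms of equation (6); L-BFGS drives their sum down, trading one off against the other.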

5 Results

The proposed framework provides a general treatment of time-dependent and nonlinear partial differential equations, which can be of fundamentally different nature. This generality will be demonstrated by applying the algorithm to a dataset originally proposed in Rudye1602614, where sparse regression techniques are used to discover partial differential equations from time series measurements in the spatial domain. This dataset covers a wide range of canonical problems spanning a number of scientific domains including the Navier-Stokes, Schrödinger, and Kuramoto-Sivashinsky equations. Moreover, all data and codes used in this manuscript are publicly available on GitHub at https://github.com/maziarraissi/HPM.

5.1 Burgers’ Equation

Burgers’ equation arises in various areas of applied mathematics, including fluid mechanics, nonlinear acoustics, gas dynamics, and traffic flow basdevant1986spectral. It is a fundamental partial differential equation and can be derived from the Navier-Stokes equations for the velocity field by dropping the pressure gradient term. Burgers’ equation, despite its relation to the much more complicated Navier-Stokes equations, does not exhibit turbulent behavior. However, for small values of the viscosity parameters, Burgers’ equation can lead to shock formation that is notoriously hard to resolve by classical numerical methods. In one space dimension the equation reads as

u_t + λ₁ u u_x − λ₂ u_xx = 0,    (7)

with λ = (λ₁, λ₂) being the unknown parameters. The original data set proposed in Rudye1602614 contains 101 time snapshots of a solution to the Burgers' equation with a Gaussian initial condition, propagating into a traveling wave. The snapshots are Δt apart. The spatial discretization of each snapshot involves a uniform grid with 256 cells. As depicted in figure 1, using only two of these snapshots (randomly selected) with 71 and 69 data points, respectively, the algorithm is capable of identifying the correct parameter values with relatively good accuracy. It should be noted that we are using only 140 data points out of a total of 25,856 in the original data set. This surprising performance is achieved at the cost of explicitly encoding the underlying physical laws expressed by the Burgers' equation in the covariance functions of the hidden physics model (5). For a systematic study of the performance of the method, let us carry out the same experiment as the one illustrated in figure 1 for every pair of consecutive snapshots in the original dataset. We are still using the same number of data points (i.e., 71 and 69) for each pair of snapshots, albeit in different locations. The resulting statistics for the learned parameter values are reported in table 1. As is clearly demonstrated in this table, more noise in the data leads to less confidence in the estimated values for the parameters. Moreover, let us recall the main assumption of this work that the gap Δt between the pair of snapshots should be small enough so that we can employ the backward Euler scheme (see equation (2)). To test the importance of this assumption, let us use the exact same setup as the one explained in figure 1, but increase Δt. The results are reported in table 2. Therefore, the most important observations about the proposed methodology are that more data, less noise, and a smaller gap between the two snapshots enhance the performance of the algorithm.
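The subsampling protocol described above can be sketched as follows (a synthetic placeholder array stands in for the actual dataset from the paper's GitHub repository; snapshot indices and point locations are drawn at random, as in the experiment):

```python
import numpy as np

# Illustration of the two-snapshot sampling protocol: from a 101 x 256
# space-time solution array, pick two consecutive snapshots and random
# subsets of 71 and 69 training points.
rng = np.random.default_rng(1)
U = rng.standard_normal((101, 256))   # placeholder for the Burgers' solution
n = rng.integers(1, 101)              # index of the later snapshot t^n
idx_n = rng.choice(256, size=71, replace=False)
idx_nm1 = rng.choice(256, size=69, replace=False)
x_n, y_n = idx_n, U[n, idx_n]                 # training data at time t^n
x_nm1, y_nm1 = idx_nm1, U[n - 1, idx_nm1]     # training data at time t^{n-1}
```

Only 140 of the 25,856 available space-time points enter the training set; everything else is left for validation of the identified equation.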

Figure 1: Burgers' equation: A solution to the Burgers' equation is depicted in the top panel. The two white vertical lines in this panel specify the locations of the two randomly selected snapshots. These two snapshots are Δt apart and are plotted in the middle panel. The red crosses denote the locations of the training data points. The correct partial differential equation along with the identified ones are reported in the lower panel.
                  Clean Data         Noise              Noise
                  λ₁       λ₂        λ₁       λ₂        λ₁       λ₂
First Quartile    1.0247   0.0942    0.9168   0.0784    0.3135   0.0027
Median            1.0379   0.0976    1.0274   0.0919    0.8294   0.0981
Third Quartile    1.0555   0.0987    1.1161   0.1166    1.2488   0.1543
Table 1: Burgers' equation: Resulting statistics for the learned parameter values (λ₁, λ₂) under increasing noise levels.
                   Δt increasing →
Clean Data    λ₁   1.0283   1.1438   1.2500   1.2960
              λ₂   0.1009   0.0934   0.0694   0.0431
Noise         λ₁   1.0170   1.1470   1.2584   1.3063
              λ₂   0.0935   0.0939   0.0711   0.0428
Table 2: Burgers' equation: Effect of increasing the gap Δt between the pair of snapshots.

5.2 The KdV Equation

As a mathematical model of waves on shallow water surfaces one could consider the Korteweg-de Vries (KdV) equation. This equation can also be viewed as Burgers’ equation with an added dispersive term. The KdV equation has several connections to physical problems. It describes the evolution of long one-dimensional waves in many physical settings. Such physical settings include shallow-water waves with weakly non-linear restoring forces, long internal waves in a density-stratified ocean, ion acoustic waves in a plasma, and acoustic waves on a crystal lattice. Moreover, the KdV equation is the governing equation of the string in the Fermi-Pasta-Ulam problem dauxois2008fermi in the continuum limit. The KdV equation reads as

u_t + λ₁ u u_x + λ₂ u_xxx = 0,    (8)

with λ = (λ₁, λ₂) being the unknown parameters. The original dataset proposed in Rudye1602614 contains a two soliton solution to the KdV equation with 512 spatial points and 201 time-steps. The snapshots are Δt apart. As depicted in figure 2, using only two of these snapshots (randomly selected) with 111 and 109 data points, respectively, the algorithm is capable of identifying the correct parameter values with relatively good accuracy. In particular, we are using 220 out of a total of 102,912 data points in the original data set. This level of efficiency is a direct consequence of equation (5), where the covariance functions explicitly encode the underlying physical laws expressed by the KdV equation. As a sensitivity analysis of the reported results, let us perform the same experiment as the one illustrated in figure 2 for every pair of consecutive snapshots in the original dataset. We are still using the same number of data points (i.e., 111 and 109) for each pair of snapshots, albeit in different locations. The resulting statistics for the learned parameter values are reported in table 3. As is clearly demonstrated in this table, more noise in the data leads to less confidence in the estimated values for the parameters. Moreover, to test the sensitivity of the results with respect to the gap between the two time snapshots, let us use the exact same setup as the one explained in figure 2, but increase Δt. The results are reported in table 4. These results verify the most important observations about the proposed methodology: more data, less noise, and a smaller gap between the two snapshots enhance the performance of the algorithm.

Figure 2: The KdV equation: A solution to the KdV equation is depicted in the top panel. The two white vertical lines in this panel specify the locations of the two randomly selected snapshots. These two snapshots are Δt apart and are plotted in the middle panel. The red crosses denote the locations of the training data points. The correct partial differential equation along with the identified ones are reported in the lower panel.
                  Clean Data         Noise              Noise
                  λ₁       λ₂        λ₁       λ₂        λ₁       λ₂
First Quartile    5.7783   0.9299    5.3358   0.7885    3.7435   0.2280
Median            5.8920   0.9656    5.5757   0.8777    4.5911   0.6060
Third Quartile    6.0358   1.0083    5.7840   0.9491    5.5106   0.8407
Table 3: The KdV equation: Resulting statistics for the learned parameter values (λ₁, λ₂) under increasing noise levels.
                   Δt increasing →
Clean Data    λ₁   6.1145   5.8948   5.4014   4.1779   3.5058
              λ₂   1.0470   0.9943   0.8535   0.4475   0.1816
Noise         λ₁   5.7224   5.8288   5.4054   4.1479   3.4747
              λ₂   0.9578   0.9801   0.8563   0.4351   0.1622
Table 4: The KdV equation: Effect of increasing the gap Δt between the pair of snapshots.

5.3 Kuramoto-Sivashinsky Equation

The Kuramoto-Sivashinsky equation hyman1986kuramoto; shraiman1986order; nicolaenko1985some has similarities with Burgers' equation. However, because of the presence of both second and fourth order spatial derivatives, its behavior is far more complicated and interesting. The Kuramoto-Sivashinsky equation is a canonical model of a pattern forming system with spatio-temporal chaotic behavior. The sign of the second derivative term is such that it acts as an energy source and thus has a destabilizing effect. The nonlinear term, however, transfers energy from low to high wave numbers, where the stabilizing fourth derivative term dominates. The equation was first derived by Kuramoto in the study of reaction-diffusion equations modeling the Belousov-Zhabotinsky reaction. It was also developed by Sivashinsky in higher space dimensions in modeling small thermal diffusive instabilities in laminar flame fronts and in small perturbations from a reference Poiseuille flow of a film layer on an inclined plane. In one space dimension it has also been used as a model for the problem of Bénard convection in an elongated box, and it may be used to describe long waves on the interface between two viscous fluids and unstable drift waves in plasmas. In one space dimension the Kuramoto-Sivashinsky equation reads as

u_t + λ₁ u u_x + λ₂ u_xx + λ₃ u_xxxx = 0,    (9)

where λ = (λ₁, λ₂, λ₃) are the unknown parameters. The original dataset proposed in Rudye1602614 contains a direct numerical solution of the Kuramoto-Sivashinsky equation with 1024 spatial points and 251 time-steps. The snapshots are Δt apart. As depicted in figure 3, using only two of these snapshots (randomly selected) with 301 and 299 data points, respectively, the algorithm is capable of identifying the correct parameter values with relatively good accuracy. In particular, we are using 600 out of a total of 257,024 data points in the original data set. This is possible because of equation (5), where the covariance functions explicitly encode the underlying physical laws expressed by the Kuramoto-Sivashinsky equation. For a sensitivity analysis of the reported results, let us perform the same experiment as the one illustrated in figure 3 for every pair of consecutive snapshots in the original dataset. We are still using the same number of data points (i.e., 301 and 299) for each pair of snapshots, albeit in different locations. The resulting statistics for the learned parameter values are reported in table 5. As shown in this table, more noise in the data leads to less confidence in the estimated parameter values. Moreover, to test the sensitivity of the results with respect to the gap between the two time snapshots, let us use the exact same setup as the one explained in figure 3, but increase Δt. The results are reported in table 6. These results indicate that more data, less noise, and a smaller gap between the two snapshots enhance the performance of the algorithm.

Figure 3: Kuramoto-Sivashinsky equation: A solution to the Kuramoto-Sivashinsky equation is depicted in the top panel. The two white vertical lines in this panel specify the locations of the two randomly selected snapshots. These two snapshots are Δt apart and are plotted in the middle panel. The red crosses denote the locations of the training data points. The correct partial differential equation along with the identified ones are reported in the lower panel.
                  Clean Data                  Noise                       Noise
                  λ₁      λ₂      λ₃          λ₁      λ₂      λ₃          λ₁       λ₂      λ₃
First Quartile    0.9603  0.9829  0.9711      0.7871  0.8095  0.5891      -0.0768  0.0834  -0.0887
Median            0.9885  1.0157  0.9970      0.8746  0.9124  0.8798      0.4758   0.5539  0.4086
Third Quartile    1.0187  1.0550  1.0314      0.9565  0.9948  0.9553      0.6991   0.7644  0.7009
Table 5: Kuramoto-Sivashinsky equation: Resulting statistics for the learned parameter values (λ₁, λ₂, λ₃) under increasing noise levels.
                   Δt increasing →
Clean Data    λ₁   0.9515   0.5299   0.1757
              λ₂   1.0052   0.5614   0.1609
              λ₃   0.9803   0.5438   0.1647
Noise         λ₁   0.9081   0.5124   0.1616
              λ₂   0.9511   0.5387   0.1436
              λ₃   0.9266   0.5213   0.1483
Table 6: Kuramoto-Sivashinsky equation: Effect of increasing the gap Δt between the pair of snapshots.

5.4 Nonlinear Schrödinger Equation

The one-dimensional nonlinear Schrödinger equation is a classical field equation that is used to study nonlinear wave propagation in optical fibers and/or waveguides, Bose-Einstein condensates, and plasma waves. In optics, the nonlinear term arises from the intensity dependent index of refraction of a given material. Similarly, the nonlinear term for Bose-Einstein condensates is a result of the mean-field interactions of an interacting, N-body system. The nonlinear Schrödinger equation is given by

i h_t + λ₁ h_xx + λ₂ |h|² h = 0,    (10)

where λ = (λ₁, λ₂) are the unknown parameters. Let u denote the real part of h and v the imaginary part. Then, the nonlinear Schrödinger equation can be equivalently written as

u_t + λ₁ v_xx + λ₂ (u² + v²) v = 0,
v_t − λ₁ u_xx − λ₂ (u² + v²) u = 0.    (11)
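The equivalence between the complex-valued equation and the coupled real system can be checked numerically. In the sketch below (illustrative fields and finite differences; the values λ₁ = 0.5 and λ₂ = 1 are assumed for the example), the complex residual is split into its real and imaginary parts:

```python
import numpy as np

# Smooth stand-in fields for Re(h), Im(h) and their time derivatives.
l1, l2 = 0.5, 1.0
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
dx = x[1] - x[0]
u, v = np.cos(x), np.sin(2 * x)
u_t, v_t = 0.3 * np.sin(x), -0.2 * np.cos(x)

def dxx(f):  # centered second derivative on a periodic grid
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2

h, h_t = u + 1j * v, u_t + 1j * v_t
complex_res = 1j * h_t + l1 * dxx(h) + l2 * np.abs(h)**2 * h

# Real part of the residual: vanishes exactly when the second real
# equation holds; imaginary part: vanishes when the first one holds.
real_res = -v_t + l1 * dxx(u) + l2 * (u**2 + v**2) * u
imag_res = u_t + l1 * dxx(v) + l2 * (u**2 + v**2) * v
```

The complex residual equals `real_res + 1j*imag_res` term by term, so driving the two real equations to zero is the same as solving the original complex equation.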

Employing the backward Euler time stepping scheme, we obtain

u^n + Δt λ₁ v^n_xx + Δt λ₂ ((u^n)² + (v^n)²) v^n = u^{n−1},
v^n − Δt λ₁ u^n_xx − Δt λ₂ ((u^n)² + (v^n)²) u^n = v^{n−1}.    (12)

The above equations can be approximated by

u^n + Δt λ₁ v^n_xx + Δt λ₂ ((μ_u^{n−1})² + (μ_v^{n−1})²) v^n = u^{n−1},
v^n − Δt λ₁ u^n_xx − Δt λ₂ ((μ_u^{n−1})² + (μ_v^{n−1})²) u^n = v^{n−1},    (13)

which involves only linear operations. Here, μ_u^{n−1} and μ_v^{n−1} are the real and imaginary parts of the state of the system at the previous time step, respectively. We proceed by placing two independent Gaussian processes on u^n and v^n; i.e.,

u^n(x) ∼ 𝒢𝒫(0, k_u(x, x'; θ_u)),
v^n(x) ∼ 𝒢𝒫(0, k_v(x, x'; θ_v)).    (14)

Here, θ_u and θ_v are the hyper-parameters of the kernels k_u and k_v, respectively. The prior assumptions (14) along with equations (13) enable us to encode the underlying laws of physics expressed by the nonlinear Schrödinger equation in the resulting hidden physics model

[u^n, v^n, u^{n−1}, v^{n−1}]ᵀ ∼ 𝒢𝒫(0, K),    (15)

where K denotes the resulting 4 × 4 matrix of covariance functions.

The specific forms of the covariance functions involved in model (15) are direct functions of the prior assumptions (14) as well as equations (13). The hyper-parameters θ_u and θ_v along with the parameters λ₁ and λ₂ are learned by minimizing the negative log marginal likelihood as outlined in section 4. The original data set proposed in Rudye1602614 contains 501 time snapshots of a solution to the nonlinear Schrödinger equation with a Gaussian initial condition. The snapshots are Δt apart. The spatial discretization of each snapshot involves a uniform grid with 512 elements. As depicted in figure 4, using only two of these snapshots (randomly selected) with 49 and 51 data points, respectively, the algorithm is capable of identifying the correct parameter values with relatively good accuracy. It should be noted that we are using only 100 data points out of a total of 256,512 in the original data set. Such a performance is achieved at the cost of explicitly encoding the underlying physical laws expressed by the nonlinear Schrödinger equation in the covariance functions of the hidden physics model (15). For a systematic study of the performance of the method, let us carry out the same experiment as the one illustrated in figure 4 for every pair of consecutive snapshots in the original dataset. We are still using the same number of data points (i.e., 49 and 51) for each pair of snapshots. The resulting statistics for the learned parameter values are reported in table 7. As is clearly demonstrated in this table, more noise in the data leads to less confidence in the estimated values for the parameters. Moreover, let us recall the main assumption of this work that the gap Δt between the pair of snapshots should be small enough so that we can employ the backward Euler scheme (see equation (12)). To test the importance of this assumption, let us use the exact same setup as the one explained in figure 4, but increase Δt. The results are reported in table 8. Therefore, the most important observations about the proposed methodology are that more data, less noise, and a smaller gap between the two snapshots enhance the performance of the algorithm.

Figure 4: Nonlinear Schrödinger equation: A solution to the nonlinear Schrödinger equation is depicted in the top two panels. The two black vertical lines in these two panels specify the locations of the two randomly selected snapshots. These two snapshots are Δt apart and are plotted in the two middle panels. The red crosses denote the locations of the training data points. The correct partial differential equation along with the identified ones are reported in the lower panel. Here, u is the real part of h and v is the imaginary part.
                  Clean Data         Noise              Noise
                  λ₁       λ₂        λ₁       λ₂        λ₁       λ₂
First Quartile    0.4950   0.9960    0.3714   0.9250    -0.1186  0.6993
Median            0.5009   1.0001    0.4713   0.9946    0.4259   0.9651
Third Quartile    0.5072   1.0039    0.5918   1.0670    0.9730   1.2730
Table 7: Nonlinear Schrödinger equation: Resulting statistics for the learned parameter values (λ₁, λ₂) under increasing noise levels.
                   Δt increasing →
Clean Data    λ₁   0.5062   0.4981   0.3887   0.3097
              λ₂   0.9949   0.8987   0.7936   0.7221
Noise         λ₁   0.4758   0.4976   0.3928   0.3128
              λ₂   0.9992   0.9011   0.7975   0.7255
Table 8: Nonlinear Schrödinger equation: Effect of increasing the gap Δt between the pair of snapshots.

5.5 Navier-Stokes Equations

Navier-Stokes equations describe the physics of many phenomena of scientific and engineering interest. They may be used to model the weather, ocean currents, water flow in a pipe, and air flow around a wing. The Navier-Stokes equations in their full and simplified forms help with the design of aircraft and cars, the study of blood flow, the design of power stations, the analysis of the dispersion of pollutants, and many other applications. Let us consider the Navier-Stokes equations in two dimensions (2D) (it is straightforward to generalize the proposed framework to the Navier-Stokes equations in three dimensions (3D)), given explicitly by

u_t + λ₁ (u u_x + v u_y) = −p_x + λ₂ (u_xx + u_yy),
v_t + λ₁ (u v_x + v v_y) = −p_y + λ₂ (v_xx + v_yy),    (16)

where u(t, x, y) denotes the x-component of the velocity field, v(t, x, y) the y-component, and p(t, x, y) the pressure. Here, λ = (λ₁, λ₂) are the unknown parameters. Solutions to the Navier-Stokes equations are searched for in the set of divergence-free functions; i.e.,

u_x + v_y = 0.    (17)

This extra equation is the continuity equation for incompressible fluids that describes the conservation of mass of the fluid. Applying the backward Euler time stepping scheme to the Navier-Stokes equations (16) we obtain

u^n + Δt ( λ₁ (u^n u^n_x + v^n u^n_y) + p^n_x − λ₂ (u^n_xx + u^n_yy) ) = u^{n−1},
v^n + Δt ( λ₁ (u^n v^n_x + v^n v^n_y) + p^n_y − λ₂ (v^n_xx + v^n_yy) ) = v^{n−1},    (18)

where u^n = u(t^n, x, y), v^n = v(t^n, x, y), and p^n = p(t^n, x, y). We make the assumption that

u^n = ψ^n_y,   v^n = −ψ^n_x,    (19)

for some latent function ψ^n(x, y). Under this assumption, the continuity equation (17) will be automatically satisfied. We proceed by placing a Gaussian process prior on ψ^n

ψ^n ∼ 𝒢𝒫(0, k(x, y, x', y'; θ)),    (20)

where θ are the hyper-parameters of the kernel k. This will result in the following multi-output Gaussian process

[u^n, v^n]ᵀ ∼ 𝒢𝒫( 0, [ k_uu  k_uv ; k_vu  k_vv ] ),    (21)

where

k_uu(x, y, x', y') = ∂_y ∂_{y'} k,   k_uv(x, y, x', y') = −∂_y ∂_{x'} k,
k_vu(x, y, x', y') = −∂_x ∂_{y'} k,   k_vv(x, y, x', y') = ∂_x ∂_{x'} k.

By construction (see equation (19)), any samples generated from this multi-output Gaussian process will satisfy the continuity equation (17). Moreover, independently from ψ^n, we will place a Gaussian process prior on the pressure p^n; i.e.,

p^n ∼ 𝒢𝒫(0, k_p(x, y, x', y'; θ_p)).    (22)
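The divergence-free property guaranteed by the stream function construction can be verified numerically. In the sketch below, an arbitrary smooth field stands in for ψ^n (the field, grid, and finite-difference stencils are illustrative choices, not the paper's setup):

```python
import numpy as np

# Velocities derived from a stream function, u = psi_y and v = -psi_x,
# are divergence-free: u_x + v_y = psi_yx - psi_xy = 0.
n = 128
xs = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(xs, xs, indexing='ij')
psi = np.sin(X) * np.cos(2 * Y)     # smooth stand-in for psi^n
h = xs[1] - xs[0]

def d_dx(f):  # centered derivative along x (periodic)
    return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2 * h)

def d_dy(f):  # centered derivative along y (periodic)
    return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2 * h)

u, v = d_dy(psi), -d_dx(psi)
div = d_dx(u) + d_dy(v)             # vanishes up to round-off
```

Because the two mixed partial derivatives cancel identically, the discrete divergence is zero to machine precision even though `u` and `v` themselves are far from zero.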

We linearize the backward Euler time stepping scheme by employing the states μ_u^{n−1} and μ_v^{n−1} of the system at the previous time step and writing

u^n + Δt ( λ₁ (μ_u^{n−1} u^n_x + μ_v^{n−1} u^n_y) + p^n_x − λ₂ (u^n_xx + u^n_yy) ) = u^{n−1},
v^n + Δt ( λ₁ (μ_u^{n−1} v^n_x + μ_v^{n−1} v^n_y) + p^n_y − λ₂ (v^n_xx + v^n_yy) ) = v^{n−1}.    (23)

The above equations (23) can be rewritten as

ℒ_{x,y} u^n + Δt p^n_x = u^{n−1},
ℒ_{x,y} v^n + Δt p^n_y = v^{n−1},    (24)

by defining the linear operator ℒ_{x,y} to be given by

ℒ_{x,y} := 1 + Δt λ₁ ( μ_u^{n−1} ∂_x + μ_v^{n−1} ∂_y ) − Δt λ₂ ( ∂_xx + ∂_yy ).    (25)

This will allow us to obtain the following hidden physics model encoding the structure of the Navier-Stokes equations and the backward Euler time stepping scheme in its kernels; i.e.,

[u^n, v^n, u^{n−1}, v^{n−1}]ᵀ ∼ 𝒢𝒫(0, K),    (26)

where the entries of K follow by applying the linear operators of equations (24) to the kernels k_uu, k_uv, k_vu, k_vv of (21) and the pressure kernel k_p of (22); for instance,

cov(u^n, u^{n−1}) = ℒ_{x',y'} k_uu,

and

cov(u^{n−1}, u^{n−1}) = ℒ_{x,y} ℒ_{x',y'} k_uu + Δt² ∂_x ∂_{x'} k_p.
The lower triangular portion of the matrix of covariance functions (26) is not shown due to symmetry. The hyper-parameters θ and θ_p along with the parameters λ₁ and λ₂ are learned by minimizing the negative log marginal likelihood as outlined in section 4. As for the data, following the exact same instructions as the ones provided in kutz2016dynamic and Rudye1602614, we simulate the Navier-Stokes equations describing the two-dimensional fluid flow past a circular cylinder at Reynolds number 100 using the Immersed Boundary Projection Method taira2007immersed; colonius2008fast. This approach utilizes a multi-domain scheme with four nested domains, each successive grid being twice as large as the previous one. Length and time are nondimensionalized so that the cylinder has unit diameter and the flow has unit velocity. Data is collected on the finest domain. The flow solver uses a third-order Runge-Kutta integration scheme with a time step of 0.02, which has been verified to yield well-resolved and converged flow fields. After the simulation converges to steady periodic vortex shedding, flow snapshots are saved at regular intervals. As depicted in figure 5, using only two snapshots of the velocity field with 251 and 249 data points, respectively, the algorithm is capable of identifying the correct parameter values with relatively good accuracy. (It is worth emphasizing that we are not making use of any data on the pressure or vorticity fields. In practice, unlike the velocity field (e.g., Particle Image Velocimetry (PIV) data), direct measurements of the pressure or vorticity fields are more demanding, if not impossible, to obtain. Our method circumvents the need for pressure data simply because of the prior assumption (21), whereby any samples generated from this multi-output Gaussian process satisfy the continuity equation (17).) It should be noted that we are using only two snapshots with a total of 500 data points. This surprising performance is achieved at the cost of explicitly encoding the underlying physical laws expressed by the Navier-Stokes equations in the covariance functions of the hidden physics model (26). For a sensitivity analysis of the reported results, let us perform the same experiment as the one illustrated in figure 5 for 501 pairs of consecutive snapshots. We are still using the same number of data points (i.e., 251 and 249) for each pair of snapshots. The resulting statistics for the learned parameter values are reported in table 9. As is clearly demonstrated in this table, more noise in the data leads to less confidence in the estimated values for the parameters. Moreover, to test the sensitivity of the results with respect to the gap between two time snapshots, let us use the exact same setup as the one explained in figure 5, but increase Δt. The results are reported in table 10. These results verify the most important observations about the proposed methodology: more data, less noise, and a smaller gap between the two snapshots enhance the performance of the algorithm. In particular, the results reported in table 10 indicate that to obtain more accurate estimates of the Reynolds number one needs to utilize a smaller gap between the pair of snapshots. To verify the validity of this conjecture, let us decrease the gap Δt between the pair of time snapshots while employing the exact same setup as the one explained in figure 5. The results are reported in table 11. As is clearly demonstrated in this table, a smaller Δt leads to more accurate estimates of the Reynolds number in the absence of noise in the data. However, a smaller Δt also seems to make the algorithm more susceptible to noise in the data.

Figure 5: Navier-Stokes equations: A single snapshot of the vorticity field of a solution to the Navier-Stokes equations for the fluid flow past a cylinder is depicted in the top panel. The black box in this panel specifies the sampling region. Two snapshots of the velocity field, taken Δt apart, are plotted in the two middle panels. The black crosses denote the locations of the training data points. The correct partial differential equation, along with the identified ones, is reported in the lower panel. Here, u denotes the x-component of the velocity field, v the y-component, p the pressure, and w the vorticity field.
                 Clean Data        Noise             Noise
First Quartile   0.9854   0.0069   0.8323   0.0057   0.5373   0.0026
Median           0.9928   0.0077   0.8717   0.0063   0.6498   0.0030
Third Quartile   1.0001   0.0086   0.9102   0.0070   0.7619   0.0046
Table 9: Navier-Stokes equations: Resulting statistics for the learned parameter values (each column reports the pair of learned parameters for the corresponding noise level).
Clean Data   0.9834   0.9925   0.9955   0.9976   1.0021
             0.0083   0.0072   0.0058   0.0040   0.0027
Noise        0.8488   0.9298   0.9597   0.9726   0.9791
             0.0140   0.0110   0.0088   0.0069   0.0053
Table 10: Navier-Stokes equations: Effect of increasing the gap Δt between the pair of snapshots (each pair of rows reports the two learned parameters).
Clean Data   0.9834   0.9688   0.9406
             0.0083   0.0091   0.0104
Noise        0.8488   0.7384   0.6107
             0.0140   0.0159   0.0217
Table 11: Navier-Stokes equations: Effect of decreasing the gap Δt between the pair of snapshots (each pair of rows reports the two learned parameters).
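The quartile statistics reported in table 9 summarize the batch of per-pair parameter estimates. A minimal sketch of that aggregation step, with hypothetical synthetic estimates standing in for the values learned from the 501 snapshot pairs:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical stand-ins for the 501 per-pair parameter estimates; in the
# actual experiment these come from repeating the figure-5 procedure for
# each consecutive pair of snapshots.
estimates = 1.0 + 0.01 * rng.standard_normal(501)

first_quartile, median, third_quartile = np.percentile(estimates, [25, 50, 75])
```

Reporting quartiles rather than a mean and standard deviation keeps the summary robust to the occasional poor estimate.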

5.6 Fractional Equations

Let us consider the one dimensional fractional equation

(27)

where the fractional order α, along with the equation's coefficients, constitutes the unknown parameters. In particular, α is the fractional order of the operator, defined in the Riemann-Liouville sense podlubny1998fractional. Fractional operators often arise in modeling anomalous diffusion processes and other non-local interactions. The integer values α = 1 and α = 2 model classical advection and diffusion phenomena, respectively. However, in the fractional calculus setting, α can assume real values and thus continuously interpolate between inherently different model behaviors. The proposed framework allows α to be directly inferred from noisy data and opens the path to a flexible formalism for model discovery and calibration. Applying the backward Euler time stepping scheme to equation (27), we obtain

(28)

Here, u^n := u(t^n, x) denotes the hidden state of the system at time t^n. We make the prior assumption that

(29)

The prior assumption (29), along with the backward Euler scheme (28), allows us to obtain the following hidden physics model corresponding to the fractional equation (27); i.e.,

(30)

The only technicality induced by fractional operators has to do with deriving the covariance kernels appearing in (30) that involve the fractional operator. These kernels were obtained by taking the inverse Fourier transform podlubny1998fractional of the corresponding Fourier-domain expression, in which the fractional operator acts on the Fourier transform of the base kernel through its symbol. The hyper-parameters, along with the parameters of equation (27), including the fractional order α, are learned by minimizing the negative log marginal likelihood as outlined in section 4.

We use the hidden physics model (30) to identify the long-celebrated relation between Brownian motion and the diffusion equation Rudye1602614. The Fokker-Planck equation associated with the position of a Brownian particle is a diffusion equation. We simulated a Brownian motion at evenly spaced points in time and generated two histograms of the particle's displacement, separated by a fixed time gap. As depicted in figure 6, using only two histograms with 100 bins each, the algorithm is capable of identifying the correct fractional order and parameter values to relatively good accuracy. Moreover, let us now consider the one dimensional fractional equation

(31)

where α is the unknown parameter and (-Δ)^{α/2} denotes the fractional Laplacian operator podlubny1998fractional. The fractional Laplacian -(-Δ)^{α/2} is the operator with Fourier symbol -|k|^α; in other words, the Fourier transform of -(-Δ)^{α/2}u is given by -|k|^α û(k). The fractional Laplacian operator can also be defined as the generator of α-stable Lévy processes. Stable distributions nolan2003stable are a rich class of probability distributions that allow skewness and heavy tails. Stable distributions have been proposed as models for many types of physical and economic systems; in particular, it is argued that some observed quantities are the sum of many small terms, such as the price of a stock or the noise in a communication system, and hence a stable model should be used to describe such systems. Motivated by this observation, we simulated an α-stable Lévy process chambers1976method; weron1995computer and employed the hidden physics model resulting from equation (31) to identify the fractional order α. As depicted in figure 7, using only two histograms with 100 bins each, the algorithm is capable of identifying the correct fractional order to relatively good accuracy.
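The data-generation step for this experiment can be sketched by drawing symmetric α-stable increments with the Chambers-Mallows-Stuck method chambers1976method and binning the resulting displacements into histograms. This is an illustrative sketch, not the paper's code; the variable names and sample sizes are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def symmetric_alpha_stable(alpha, size, rng):
    # Chambers-Mallows-Stuck sampler for standard symmetric alpha-stable
    # variates: V uniform on (-pi/2, pi/2), W standard exponential.
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * V) / np.cos(V) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * V) / W) ** ((1.0 - alpha) / alpha))

alpha = 1.5
# Displacements of many independent particles; binning them yields the kind
# of 100-bin histograms used as training data in this experiment.
disp = symmetric_alpha_stable(alpha, 10_000, rng)
counts, edges = np.histogram(disp, bins=100, range=(-10.0, 10.0), density=True)

# Sanity check: at alpha = 2 the sampler reduces to a Gaussian (variance 2),
# consistent with the fractional Laplacian reducing to the classical one.
gauss = symmetric_alpha_stable(2.0, 200_000, rng)
```

The heavy tails of the α < 2 samples are exactly what distinguishes the Lévy-process histograms in figure 7 from the Brownian-motion histograms in figure 6.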

Figure 6: Fractional Equation – Brownian Motion: A single realization of a Brownian motion is depicted in the top panel. Two histograms of the particle's displacement, taken a fixed time apart, are plotted in the middle panel. The correct partial differential equation, along with the identified ones, is reported in the lower panel.
Figure 7: Fractional Equation – α-stable Lévy Process: A single realization of an α-stable Lévy process is depicted in the top panel. Two histograms of the particle's displacement, taken a fixed time apart, are plotted in the middle panel. The correct partial differential equation, along with the identified ones, is reported in the lower panel.
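The Fourier-symbol definition of the fractional Laplacian used above can be checked numerically by applying the operator spectrally on a periodic grid. The following sketch assumes a periodic domain and NumPy's FFT and is not part of the paper's code.

```python
import numpy as np

# Periodic grid on [0, 2*pi) and the corresponding wavenumbers.
N = 256
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=2.0 * np.pi / N) * 2.0 * np.pi

def fractional_laplacian(u, alpha):
    # Apply -(-Laplacian)^(alpha/2) via its Fourier symbol -|k|^alpha.
    return np.fft.ifft(-np.abs(k) ** alpha * np.fft.fft(u)).real

u = np.sin(2.0 * x)
# alpha = 2 recovers the classical Laplacian: d^2/dx^2 sin(2x) = -4 sin(2x).
classical = fractional_laplacian(u, 2.0)
# A non-integer alpha interpolates continuously: the mode k = 2 is scaled
# by -2**alpha instead of -2**2.
interpolated = fractional_laplacian(u, 1.5)
```

The same spectral identity is what allows the kernels of the hidden physics model to be obtained by inverse Fourier transform rather than by numerical differentiation.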

6 Summary and Discussion

We have introduced a structured learning machine that is explicitly informed by the underlying physics that may have generated the observed data. Exploiting this structure is critical for constructing data-efficient learning algorithms that can effectively distill information in the data-scarce scenarios that appear routinely when we study complex physical systems. We applied the proposed framework to the problem of identifying general parametric nonlinear partial differential equations from noisy data. This generality was demonstrated using various benchmark problems with different attributes. This work should be considered a direct follow-up on raissi2017numerical, in which a similar methodology was employed to infer solutions to time-dependent and nonlinear partial differential equations and to effectively quantify and propagate uncertainty due to noisy initial or boundary data. The ideas introduced in these two papers provide a natural platform for learning from noisy data and computing under uncertainty. Perhaps the most pressing limitation of this work in its present form stems from the cubic scaling of Gaussian process inference with respect to the total number of training data points. However, ideas such as recursive Kalman updates hartikainen2010kalman, variational inference hensman2013gaussian, and parametric Gaussian processes raissi2017parametric can be used to address this limitation.

Moreover, the examples studied in the current work were inspired by the pioneering work recently presented in Rudye1602614. The authors of Rudye1602614 followed a sparse regression approach using a full set of spatio-temporal time-series measurements consisting of thousands of data points. In contrast, here we used much smaller datasets with only hundreds of points and two snapshots of the system. However, unlike the work in Rudye1602614, we did not use a dictionary of all possible terms involved in the partial differential equation. We could include such a dictionary in our formulation, but that would make our kernel evaluations more expensive. Moreover, in some systems, e.g., an advection-diffusion-reaction system, we know most of the terms of the equation, i.e., advection and diffusion, but typically the reaction term is unknown. In this case, we would seek to obtain the parameters in front of the advection and diffusion terms and discover the functional form of the reaction term, along with any parameters, using the methodology outlined in this paper. In comparison to Rudye1602614, our method does not require numerical differentiation, as the kernels are obtained analytically. Moreover, we do not require a regular lattice as in Rudye1602614 and can work with scattered data. An additional advantage of our approach is that it can estimate parameters appearing anywhere in the formulation of the partial differential equation, while the method of Rudye1602614 is only suitable for parameters appearing as coefficients. For example, it cannot estimate the fractional order α in the last example presented in this paper, or the parameters of partial differential equations (e.g., the sine-Gordon equation) in which the parameter appears inside a nonlinear term rather than as a coefficient. Also, the treatment of noise is somewhat complex in the method of Rudye1602614, as it involves some form of filtering via, e.g., singular value decomposition, whereas our method can handle arbitrarily noisy data automatically via the Gaussian process prior assumptions. We believe that both methods can be used effectively in different contexts, and we anticipate that this is only the beginning of a new way of thinking about and formulating new and possibly simpler equations, e.g., by employing fractional operators, which are naturally captured in our framework.

Acknowledgements

This work received support by the DARPA EQUiPS grant N66001-15-2-4055, the MURI/ARO grant W911NF-15-1-0562, and the AFOSR grant FA9550-17-1-0013. All data and codes used in this manuscript are publicly available on GitHub at https://github.com/maziarraissi/HPM.

References