Deep Hidden Physics Models: Deep Learning of Nonlinear Partial Differential Equations

01/20/2018
by   Maziar Raissi, et al.

A long-standing problem at the interface of artificial intelligence and applied mathematics is to devise an algorithm capable of achieving human level or even superhuman proficiency in transforming observed data into predictive mathematical models of the physical world. In the current era of abundance of data and advanced machine learning capabilities, the natural question arises: How can we automatically uncover the underlying laws of physics from high-dimensional data generated from experiments? In this work, we put forth a deep learning approach for discovering nonlinear partial differential equations from scattered and potentially noisy observations in space and time. Specifically, we approximate the unknown solution as well as the nonlinear dynamics by two deep neural networks. The first network acts as a prior on the unknown solution and essentially enables us to avoid numerical differentiations which are inherently ill-conditioned and unstable. The second network represents the nonlinear dynamics and helps us distill the mechanisms that govern the evolution of a given spatiotemporal data-set. We test the effectiveness of our approach for several benchmark problems spanning a number of scientific domains and demonstrate how the proposed framework can help us accurately learn the underlying dynamics and forecast future states of the system. In particular, we study the Burgers', Korteweg-de Vries (KdV), Kuramoto-Sivashinsky, nonlinear Schrödinger, and Navier-Stokes equations.


1 Introduction

Recent advances in machine learning, together with new data recordings and sensor technologies, have the potential to revolutionize our understanding of the physical world in modern application areas such as neuroscience, epidemiology, finance, and dynamic network analysis, where first-principles derivations may be intractable Rudye1602614. In particular, many concepts from statistical learning can be integrated with classical methods in applied mathematics to help us discover sufficiently sophisticated and accurate mathematical models of complex dynamical systems directly from data. This integration of nonlinear dynamics and machine learning opens the door to principled methods for model construction, predictive modeling, nonlinear control, and reinforcement learning strategies. The literature on data-driven discovery of dynamical systems crutchfield1987equations is vast and encompasses equation-free modeling kevrekidis2003equation, artificial neural networks raissi2018multistep; gonzalez1998identification; anderson1996comparison; rico1992discrete, nonlinear regression voss1999amplitude, empirical dynamic modeling sugihara2012detecting; ye2015equation, modeling emergent behavior roberts2014model, automated inference of dynamics schmidt2011automated; daniels2015automated; daniels2015efficient, normal form identification in climate majda2009normal, nonlinear Laplacian spectral analysis giannakis2012nonlinear, Koopman analysis mezic2005spectral; budivsic2012applied; mezic2013analysis; brunton2017chaos, and symbolic regression bongard2007automated; schmidt2009distilling. More recently, sparsity tibshirani1996regression has been used to determine the governing dynamical system brunton2016discovering; mangan2016inferring; wang2011predicting; schaeffer2013sparse; ozolicnvs2013compressed; mackey2014compressive; brunton2014compressive; proctor2014exploiting; bai2014low; tran2016exact.

Less well studied is the question of how to discover the underlying physical laws expressed by partial differential equations from scattered data collected in space and time. Inspired by recent developments in physics-informed deep learning raissi2017physics_I; raissi2017physics_II, we construct structured nonlinear regression models that can uncover the dynamic dependencies in a given spatio-temporal dataset, and return a model that can subsequently be used to forecast future states. In contrast to recent approaches to system identification brunton2016discovering; Rudye1602614, here we do not need direct access to, or approximations of, temporal or spatial derivatives. Moreover, we use a richer class of function approximators to represent the nonlinear dynamics and consequently do not have to commit to a particular family of basis functions. Specifically, we consider nonlinear partial differential equations of the general form

$u_t = \mathcal{N}(t, x, u, u_x, u_{xx}, \ldots),$   (1)

where $\mathcal{N}$ is a nonlinear function of time $t$, space $x$, the solution $u$, and its derivatives. (The solution could be multi-dimensional, in which case $u_x$ denotes the collection of all element-wise first order derivatives and $u_{xx}$ includes all element-wise second order derivatives.) Here, the subscripts denote partial differentiation in either time $t$ or space $x$. (The space $x$ could likewise be a vector of dimension $d$; in this case, $u_x$ denotes the collection of all first order spatial derivatives and $u_{xx}$ represents the set of all second order spatial derivatives, including mixed ones.) Given a set of scattered and potentially noisy observations of the solution $u$, we are interested in learning the nonlinear function $\mathcal{N}$ and consequently discovering the hidden laws of physics that govern the evolution of the observed data.

For instance, let us assume that we would like to discover the Burgers' equation basdevant1986spectral in one space dimension. Although not pursued in the current work, a viable approach Rudye1602614 to tackle this problem is to create a dictionary of possible terms and write the expansion

$u_t = \sum_{k=1}^{K} \alpha_k \, \phi_k,$

where each candidate term $\phi_k$ is built from the solution and its spatial derivatives (e.g., $1, u, u^2, u_x, u\,u_x, u_{xx}, \ldots$). Given this large collection of candidate terms for constructing the partial differential equation, one could then use sparse regression techniques Rudye1602614 to determine the coefficients $\alpha_k$ and consequently the right-hand-side terms that are contributing to the dynamics. A huge advantage of this approach is the interpretability of the learned equations. However, there are two major drawbacks with this method.
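To make the dictionary idea concrete before turning to its drawbacks, the following is a minimal sketch of the sparse regression step in the spirit of Rudye1602614, written in Python/NumPy. The function name, the hard-thresholding loop, and the example candidate terms are illustrative assumptions rather than the exact algorithm of that reference, and the derivative arrays are presumed to have been computed numerically beforehand.

```python
import numpy as np

def sparse_pde_regression(u_t, candidates, names, lam=1e-5, threshold=0.1, n_iters=10):
    """Fit u_t ~ Theta @ xi by ridge-regularized least squares, then repeatedly
    zero out small coefficients (a crude hard-thresholding loop)."""
    Theta = np.column_stack([c.ravel() for c in candidates])  # library matrix, one column per term
    b = u_t.ravel()
    xi = np.zeros(Theta.shape[1])
    active = np.ones(Theta.shape[1], dtype=bool)
    for _ in range(n_iters):
        A = Theta[:, active]
        coef = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
        xi[:] = 0.0
        xi[active] = coef
        active = np.abs(xi) > threshold  # keep only the significant terms
        if not active.any():
            break
    return {name: c for name, c in zip(names, xi) if abs(c) > 0.0}

# Hypothetical usage: u, u_x, u_xx, u_t are arrays of shape (n_t, n_x) obtained by
# numerically differentiating gridded observations of the solution.
# model = sparse_pde_regression(u_t, [u, u_x, u_xx, u * u_x], ["u", "u_x", "u_xx", "u*u_x"])
```

The two drawbacks discussed next are precisely the numerical differentiation needed to build the left-hand side and the candidate columns, and the assumption that the hand-picked library is rich enough.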

First, it relies on numerical differentiation to compute the derivatives involved in equation (1). Derivatives are taken either using finite differences for clean data or with polynomial interpolation in the presence of noise. Numerical approximations of derivatives are inherently ill-conditioned and unstable baydin2015automatic, even in the absence of noise in the data. This is due to the introduction of truncation and round-off errors inflicted by the limited precision of computations and the chosen value of the step size for finite differencing. Thus, this approach requires far more data points than the number of library functions. This need for a large number of points lies more in the numerical evaluation of derivatives than in supplying sufficient data for the regression.

Second, in applying the algorithm Rudye1602614 outlined above we assume that the chosen library is sufficiently rich to admit a sparse representation of the time dynamics of the dataset. However, when applying this approach to a dataset whose dynamics are in fact unknown, it is not unlikely that the chosen basis is insufficient. In particular, in higher dimensions (i.e., for multi-dimensional input $x$ or output $u$) the required number of terms to include in the library increases exponentially. Moreover, an additional issue with this approach is that it can only estimate parameters appearing as coefficients. For example, this method cannot estimate parameters of a partial differential equation (e.g., the sine-Gordon equation) involving a term like $\sin(\beta u)$ with $\beta$ being the unknown parameter, even if we include sines and cosines in the dictionary of possible terms.

One could avoid the first drawback concerning numerical differentiation by assigning prior distributions in the form of Gaussian processes raissi2017machine; raissi2017hidden; raissi2017inferring; raissi2017numerical or neural networks raissi2017physics_II; raissi2017physics_I to the unknown solution $u$. Derivatives of the prior on $u$ can then be evaluated at machine precision using symbolic or automatic differentiation baydin2015automatic. This removes the requirement for having or generating data on derivatives of the solution $u$, which is enabling as it allows us to work with noisy observations of the solution $u$ that are scattered in space and time. Moreover, this approach requires far fewer data points than the method proposed in Rudye1602614 simply because, as explained above, the need for a large number of data points was driven by the numerical evaluation of derivatives.

The second drawback can be addressed in a similar fashion by approximating the nonlinear function $\mathcal{N}$ (see equation (1)) with a neural network. Representing the nonlinear function $\mathcal{N}$ by a deep neural network is the novelty of the current work. Deep neural networks are a richer family of function approximators, and consequently we do not have to commit to a particular class of basis functions such as polynomials or sines and cosines. This expressiveness comes at the cost of losing interpretability of the learned dynamics. However, there is nothing hindering the use of a particular class of basis functions in order to obtain more interpretable equations raissi2017physics_II.

2 Solution methodology

We proceed by approximating both the solution $u$ and the nonlinear function $\mathcal{N}$ with two deep neural networks (representing the solution $u$ by a deep neural network is inspired by recent developments in physics-informed deep learning raissi2017physics_I; raissi2017physics_II, while approximating the nonlinear function $\mathcal{N}$ by another network is the novelty of this work) and define a deep hidden physics model $f$ to be given by

$f := u_t - \mathcal{N}(t, x, u, u_x, u_{xx}, \ldots).$   (2)

We obtain the derivatives of the neural network $u$ with respect to time $t$ and space $x$ by applying the chain rule for differentiating compositions of functions using automatic differentiation baydin2015automatic. It is worth emphasizing that automatic differentiation is different from, and in several aspects superior to, numerical or symbolic differentiation, the two most commonly encountered techniques for computing derivatives. In its most basic description baydin2015automatic, automatic differentiation relies on the fact that all numerical computations are ultimately compositions of a finite set of elementary operations for which derivatives are known. Combining the derivatives of the constituent operations through the chain rule gives the derivative of the overall composition. This allows accurate evaluation of derivatives at machine precision with ideal asymptotic efficiency and only a small constant factor of overhead. In particular, to compute the derivatives involved in definition (2) we rely on TensorFlow abadi2016tensorflow, a popular and relatively well documented open source software library for automatic differentiation and deep learning computations.
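As an illustration of this automatic differentiation step, the following is a minimal TensorFlow 2 sketch of how the derivatives entering definition (2) can be obtained from a neural network representation of the solution $u$. It uses the tf.GradientTape API; the original implementation accompanying this work predates that API, so the snippet should be read as an assumed equivalent rather than the authors' code.

```python
import tensorflow as tf

def derivatives(u_net, t, x):
    """Evaluate u, u_t, u_x and u_xx at scattered points by automatic differentiation
    of the network u_net: (t, x) -> u. Both t and x are tensors of shape (n_points, 1)."""
    with tf.GradientTape(persistent=True) as outer:
        outer.watch(x)
        with tf.GradientTape(persistent=True) as inner:
            inner.watch([t, x])
            u = u_net(tf.concat([t, x], axis=1))
        u_t = inner.gradient(u, t)
        u_x = inner.gradient(u, x)  # this computation is itself recorded by the outer tape
    u_xx = outer.gradient(u_x, x)
    return u, u_t, u_x, u_xx
```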

The parameters of the neural networks $u$ and $\mathcal{N}$ can be learned by minimizing the sum of squared errors

$\text{SSE} := \sum_{i=1}^{N} \left( \left|u(t^i, x^i) - u^i\right|^2 + \left|f(t^i, x^i)\right|^2 \right),$   (3)

where $\{t^i, x^i, u^i\}_{i=1}^{N}$ denote the training data on the solution $u$. The first term tries to fit the observed data by adjusting the parameters of the neural network $u$, while the second term learns the parameters of the network $\mathcal{N}$ by trying to satisfy the partial differential equation (1) at the collocation points $(t^i, x^i)$. Training the parameters of the neural networks $u$ and $\mathcal{N}$ can be performed simultaneously by minimizing the sum of squared errors (3), or in a sequential fashion by training $u$ first and $\mathcal{N}$ second.
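Putting the pieces together, the sketch below spells out one possible TensorFlow 2 version of the objective (3) and a single optimization step, specialized to the second order case $u_t = \mathcal{N}(u, u_x, u_{xx})$ and reusing the derivatives helper from the previous snippet; the optimizer choice and all names are illustrative assumptions, not a description of the authors' training setup.

```python
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

def dhpm_loss(u_net, N_net, t, x, u_data):
    """Sum of squared errors of eq. (3): data misfit on u plus the residual
    f := u_t - N(u, u_x, u_xx) of eq. (2), evaluated here at the training points."""
    u, u_t, u_x, u_xx = derivatives(u_net, t, x)  # helper from the previous sketch
    f = u_t - N_net(tf.concat([u, u_x, u_xx], axis=1))
    return tf.reduce_sum(tf.square(u - u_data)) + tf.reduce_sum(tf.square(f))

@tf.function
def train_step(u_net, N_net, t, x, u_data):
    """One simultaneous update of the parameters of both networks."""
    with tf.GradientTape() as tape:
        loss = dhpm_loss(u_net, N_net, t, x, u_data)
    variables = u_net.trainable_variables + N_net.trainable_variables
    optimizer.apply_gradients(zip(tape.gradient(loss, variables), variables))
    return loss
```

Training $u$ first and $\mathcal{N}$ second amounts to running the same step twice while restricting the gradient to one network's variables at a time.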

How can we make sure that the algorithm presented above results in an acceptable function $\mathcal{N}$? One answer would be to solve the learned equation and compare the resulting solution to the solution of the exact partial differential equation. However, it should be pointed out that the learned function $\mathcal{N}$ is a black-box function; i.e., we do not know its functional form. Consequently, none of the classical partial differential equation solvers such as finite differences, finite elements, or spectral methods are applicable here. Therefore, to solve the learned equations we have no other choice than to resort to modern black-box solvers such as physics-informed neural networks (PINNs) introduced in raissi2017physics_I. (PINNs have also been used in raissi2017physics_II to address the problem of data-driven discovery of partial differential equations in cases where the nonlinear function $\mathcal{N}$ is known up to a set of parameters.) The steps involved in using PINNs as solvers are similar to equations (1), (2), and (3), with the nonlinear function $\mathcal{N}$ being known and the data residing on the boundary of the domain.

3 Results

The proposed framework provides a universal treatment of nonlinear partial differential equations of fundamentally different nature. This generality will be demonstrated by applying the algorithm to a wide range of canonical problems spanning a number of scientific domains including the Burgers’, Korteweg-de Vries (KdV), Kuramoto-Sivashinsky, nonlinear Schrödinger, and Navier-Stokes equations. These examples are motivated by the pioneering work of Rudye1602614. All data and codes used in this manuscript are publicly available on GitHub at https://github.com/maziarraissi/DeepHPMs.

3.1 Burgers’ equation

Let us start with the Burgers’ equation arising in various areas of engineering and applied mathematics, including fluid mechanics, nonlinear acoustics, gas dynamics, and traffic flow basdevant1986spectral. In one space dimension the Burgers’ equation reads as

$u_t = -u\, u_x + \nu\, u_{xx},$   (4)

where $\nu > 0$ denotes the viscosity.

To obtain a set of training and test data we simulate the Burgers' equation (4) using conventional spectral methods. Specifically, starting from a given initial condition and assuming periodic boundary conditions, we integrate equation (4) up to a final time. We use the Chebfun package driscoll2014chebfun with a spectral Fourier discretization with 256 modes and a fourth-order explicit Runge-Kutta temporal integrator. The solution is saved at equally spaced times to give us a total of 201 snapshots. Out of this dataset, we generate a smaller training subset, scattered in space and time, by randomly sub-sampling data points from roughly the first two-thirds of the time interval, which we call the training portion of the domain. The rest of the domain, up to the final time, will be referred to as the test portion. Using this terminology, we are in fact sub-sampling from the original dataset only in the training portion of the domain. Given the training data, we are interested in learning $\mathcal{N}$ as a function of the solution $u$ and its derivatives up to the 2nd order (a detailed study of the choice of the order is provided later in this section); i.e.,

$u_t = \mathcal{N}(u, u_x, u_{xx}).$   (5)
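Before describing the network architectures, the following NumPy sketch shows one way the scattered training triples (t, x, u) can be drawn from gridded snapshots of a precomputed simulation; the array names, the split time t_train_max, and the optional noise model (white noise scaled by the standard deviation of the solution, as in the noise study below) are illustrative assumptions.

```python
import numpy as np

def subsample_training_set(x_grid, t_grid, U, t_train_max, n_samples, noise=0.0, seed=0):
    """Randomly pick scattered (t, x, u) triples from the training portion t <= t_train_max.
    x_grid: (n_x,), t_grid: (n_t,), U: (n_t, n_x) solution snapshots."""
    rng = np.random.default_rng(seed)
    T, X = np.meshgrid(t_grid, x_grid, indexing="ij")          # both of shape (n_t, n_x)
    mask = T <= t_train_max                                    # restrict to the training portion
    idx = rng.choice(np.flatnonzero(mask), size=n_samples, replace=False)
    t, x, u = T.ravel()[idx], X.ravel()[idx], U.ravel()[idx]
    u = u + noise * np.std(U) * rng.standard_normal(u.shape)   # optional white noise corruption
    return t[:, None], x[:, None], u[:, None]                  # column vectors for the networks
```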

We represent the solution $u$ by a 5-layer deep neural network with 50 neurons per hidden layer. Furthermore, we let $\mathcal{N}$ be a neural network with 2 hidden layers and 100 neurons per hidden layer. These two networks are trained by minimizing the sum of squared errors loss of equation (3). To illustrate the effectiveness of our approach, we solve the learned partial differential equation (5), along with periodic boundary conditions and the same initial condition as the one used to generate the original dataset, using the PINNs algorithm raissi2017physics_I. The original dataset alongside the resulting solution of the learned partial differential equation are depicted in figure 1. This figure indicates that our algorithm is able to accurately identify the underlying partial differential equation with a relative $L^2$-error of 4.78e-03. It should be highlighted that the training data are collected in roughly the first two-thirds of the temporal domain; the algorithm is thus extrapolating in time beyond the training window. The relative $L^2$-error on the training portion of the domain is 3.89e-03.
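For concreteness, one possible Keras instantiation of the two architectures just described is sketched below; the layer counts and widths follow the text, while the tanh activation (and reading "5-layer" as five hidden layers) are assumptions.

```python
import tensorflow as tf

def mlp(n_outputs, n_hidden_layers, width, activation="tanh"):
    """Plain fully connected network; the input dimension is inferred on the first call."""
    net = tf.keras.Sequential()
    for _ in range(n_hidden_layers):
        net.add(tf.keras.layers.Dense(width, activation=activation))
    net.add(tf.keras.layers.Dense(n_outputs))
    return net

u_net = mlp(n_outputs=1, n_hidden_layers=5, width=50)    # u(t, x), inputs (t, x)
N_net = mlp(n_outputs=1, n_hidden_layers=2, width=100)   # N(u, u_x, u_xx)
```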

Figure 1: Burgers' equation: A solution to the Burgers' equation (left panel) is compared to the corresponding solution of the learned partial differential equation (right panel). The identified system correctly captures the form of the dynamics and accurately reproduces the solution with a relative $L^2$-error of 4.78e-03. It should be emphasized that the training data are collected in roughly the first two-thirds of the temporal domain, the region delimited by the white vertical lines; the algorithm is thus extrapolating in time beyond the training window. The relative $L^2$-error on the training portion of the domain is 3.89e-03.

Furthermore, we performed a systematic study of the results reported in figure 1 with respect to the noise level in the data, keeping the total number of training observations as well as the neural network architectures fixed to the settings described above. In particular, we added white noise with magnitude equal to one, two, and five percent of the standard deviation of the solution function. The results of this study are summarized in table 1. The key observation here is that less noise in the data enhances the performance of the algorithm. Our experience so far indicates that the negative consequences of more noise in the data can be remedied to some extent by obtaining more data. Another fundamental point to make is that, in general, the choice of the neural network architectures, i.e., activation functions and number of layers/neurons, is crucial. However, in many cases of practical interest this choice still remains an art, and systematic studies with respect to the neural network architectures often fail to reveal consistent patterns raissi2017physics_I; raissi2017physics_II; raissi2018multistep. We usually observe some variability and non-monotonic trends in systematic studies pertaining to the network architectures. In this regard, there exist a series of open questions mandating further investigation, for instance how common techniques such as batch normalization, dropout, and $L^1$/$L^2$ regularization goodfellow2016deep could enhance the robustness of the proposed algorithm with respect to the neural network architectures as well as noise in the data.

Noise level:           Clean data   1% noise   2% noise   5% noise
Relative $L^2$-error:   4.78e-03     2.64e-02   1.09e-01   4.56e-01
Table 1: Burgers' equation: Relative $L^2$-error between solutions of the Burgers' equation and the learned partial differential equation as a function of the noise corruption level in the data. Here, the total number of training data as well as the neural network architectures are kept fixed.

To further scrutinize the performance of the algorithm, let us change the initial condition to a Gaussian and solve the Burgers' equation (4) using a classical partial differential equation solver. In particular, the new dataset Rudye1602614 contains 101 time snapshots of a solution to the Burgers' equation (4) with a Gaussian initial condition, propagating into a traveling wave. The snapshots are equally spaced in time, and the spatial discretization of each snapshot involves a uniform grid with 256 cells. We compare the resulting solution to the one obtained by solving the learned partial differential equation (5) using the PINNs algorithm raissi2017physics_I. It is worth emphasizing that the algorithm is trained on the dataset depicted in figure 1 and is being tested on a totally different dataset as shown in figure 2. The surprising result reported in figure 2 is a strong indication that the algorithm is capable of accurately identifying the underlying partial differential equation. The algorithm has not seen even a single observation of the dataset shown in figure 2 during model training and is yet capable of achieving a relatively accurate approximation of the true solution. The identified system reproduces the solution to the Burgers' equation with a relative $L^2$-error of 7.33e-02.

However, the aforementioned result seems to be sensitive to making the nonlinear function $\mathcal{N}$ of equation (5) depend on either time $t$ or space $x$, or both. For instance, if we look for equations of the form $u_t = \mathcal{N}(t, u, u_x, u_{xx})$ or $u_t = \mathcal{N}(x, u, u_x, u_{xx})$, the relative $L^2$-errors between the exact and the learned solutions corresponding to figure 2 increase to 4.25e-01 and 2.58e-01. Moreover, if we make the nonlinear function $\mathcal{N}$ both time and space dependent, the relative $L^2$-error increases even further to 1.46e+00. A possible explanation for this behavior could be that some of the dynamics in the training data is now being explained by the explicit dependence on time $t$ or space $x$. This makes the algorithm over-fit to the training data and consequently makes it harder to generalize to datasets it has never seen before. Based on our experience, more data seems to help resolve this issue. Another interesting observation is that if we train on the dataset presented in figure 2 and test on the one depicted in figure 1, i.e., swap the roles of these two datasets, the algorithm fails to generalize to the new test data. This observation suggests that the dynamics generated by the original initial condition (see figure 1) is richer than the dynamics generated by the Gaussian initial condition (see figure 2). This is also evident from a visual comparison of the datasets given in figures 1 and 2.

Figure 2: Burgers' equation: A solution to the Burgers' equation (left panel) is compared to the corresponding solution of the learned partial differential equation (right panel). The identified system correctly captures the form of the dynamics and accurately reproduces the solution with a relative $L^2$-error of 7.33e-02. It should be highlighted that the algorithm is trained on a dataset totally different from the one used in this figure.

Let us now take a closer look at equation (5) and ask: what would happen if we included derivatives of higher order than two in our formulation? As will be demonstrated in the following, the algorithm proposed in the current work is capable of handling such cases; however, this is a fundamental question worthy of a moment of reflection. The choice of the order of the partial differential equation (5) determines the form and the number of boundary conditions needed to end up with a well-posed problem. For instance, in the one-dimensional setting of equation (5), including a third order derivative requires three boundary conditions, namely periodicity of $u$, $u_x$, and $u_{xx}$ across the domain boundary, assuming periodic boundary conditions. Consequently, in cases of practical interest, the available information on the boundary of the domain could help us determine the order of the partial differential equation we are trying to identify. With this in mind, let us study the robustness of our framework with respect to the highest order of the derivatives included in equation (5). As for the boundary conditions, we use periodic conditions on $u$ and $u_x$ when solving the identified partial differential equation, regardless of the initial choice of its order. The results are summarized in table 2. The first column of table 2 demonstrates that a single first order derivative is clearly not enough to capture the second order dynamics of the Burgers' equation. Moreover, the method seems to be generally robust with respect to the number and order of derivatives included in the nonlinear function $\mathcal{N}$. Therefore, in addition to any information residing on the domain boundary, studies such as table 2, albeit performed on training or validation datasets, could help us choose the best order for the underlying partial differential equation. In this case, table 2 suggests the order of the equation to be two.

In addition, it must be stated that including higher order derivatives comes at the cost of reducing the speed of the algorithm, due to the growth in the complexity of the resulting computational graph for the corresponding deep hidden physics model (see equation (2)). Another drawback is that higher order derivatives are usually less accurate, especially if one uses a single precision floating-point format (float32) rather than double precision (float64). It is true that automatic differentiation enables us to evaluate derivatives at machine precision; however, for float32 the machine epsilon is approximately 1.19e-07. For improved speed, and constrained by typical GPU (graphics processing unit) platforms, we often end up using float32, which boils down to committing errors on the order of 1.19e-07 while computing the required derivatives. The aforementioned issues do not seem to be too serious since computer infrastructure (both hardware and software) for deep learning is constantly improving.
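When the accuracy of high-order derivatives matters more than speed, one simple option, assuming a TensorFlow/Keras workflow as sketched earlier rather than any step prescribed by the paper, is to switch the entire computation to double precision:

```python
import tensorflow as tf

# Trade speed and GPU memory for more accurate high-order derivatives
# (float64 machine epsilon is approximately 2.2e-16).
tf.keras.backend.set_floatx("float64")
```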

Highest order included:   1st order   2nd order   3rd order   4th order
Relative $L^2$-error:      1.14e+00    1.29e-02    3.42e-02    5.54e-02
Table 2: Burgers' equation: Relative $L^2$-error between solutions of the Burgers' equation and the learned partial differential equation as a function of the highest order of spatial derivatives included in our formulation. For instance, the case corresponding to the 3rd order means that we are looking for a nonlinear function $\mathcal{N}$ such that $u_t = \mathcal{N}(u, u_x, u_{xx}, u_{xxx})$. Here, the total number of training data as well as the neural network architectures are kept fixed and the data are assumed to be noiseless.

3.2 The KdV equation

As a mathematical model of waves on shallow water surfaces one could consider the Korteweg-de Vries (KdV) equation. The KdV equation reads as

$u_t = -u\, u_x - u_{xxx}.$   (6)

To obtain a set of training data we simulate the KdV equation (6) using conventional spectral methods. In particular, we start from a given initial condition, assume periodic boundary conditions, and integrate equation (6) up to a final time. We use the Chebfun package driscoll2014chebfun with a spectral Fourier discretization with 512 modes and a fourth-order explicit Runge-Kutta temporal integrator. The solution is saved at equally spaced times to give us a total of 201 snapshots. Out of this dataset, we generate a smaller training subset, scattered in space and time, by randomly sub-sampling data points from roughly the first two-thirds of the time interval; in other words, we are sub-sampling from the original dataset only in the training portion of the domain. Given the training data, we are interested in learning $\mathcal{N}$ as a function of the solution $u$ and its derivatives up to the 3rd order (a detailed study of the choice of the order is provided in section 3.1 for the Burgers' equation); i.e.,

$u_t = \mathcal{N}(u, u_x, u_{xx}, u_{xxx}).$   (7)

We represent the solution $u$ by a 5-layer deep neural network with 50 neurons per hidden layer. Furthermore, we let $\mathcal{N}$ be a neural network with 2 hidden layers and 100 neurons per hidden layer. These two networks are trained by minimizing the sum of squared errors loss of equation (3). To illustrate the effectiveness of our approach, we solve the learned partial differential equation (7) using the PINNs algorithm raissi2017physics_I. We assume periodic boundary conditions and the same initial condition as the one used to generate the original dataset. The resulting solution of the learned partial differential equation as well as the exact solution of the KdV equation are depicted in figure 3. This figure indicates that our algorithm is capable of accurately identifying the underlying partial differential equation with a relative $L^2$-error of 6.28e-02. It should be highlighted that the training data are collected in roughly the first two-thirds of the temporal domain; the algorithm is thus extrapolating in time beyond the training window. The corresponding relative $L^2$-error on the training portion of the domain is 3.78e-02.

Figure 3: The KdV equation: A solution to the KdV equation (left panel) is compared to the corresponding solution of the learned partial differential equation (right panel). The identified system correctly captures the form of the dynamics and accurately reproduces the solution with a relative $L^2$-error of 6.28e-02. It should be emphasized that the training data are collected in roughly the first two-thirds of the temporal domain, the region delimited by the white vertical lines; the algorithm is thus extrapolating in time beyond the training window. The relative $L^2$-error on the training portion of the domain is 3.78e-02.

To test the algorithm even further, let us change the initial condition and solve the KdV equation (6) using the conventional spectral method outlined above. We compare the resulting solution to the one obtained by solving the learned partial differential equation (7) using the PINNs algorithm raissi2017physics_I. It is worth emphasizing that the algorithm is trained on the dataset depicted in figure 3 and is being tested on a different dataset as shown in figure 4. The surprising result reported in figure 4 strongly indicates that the algorithm is accurately learning the underlying partial differential equation; i.e., the nonlinear function $\mathcal{N}$. The algorithm has not seen the dataset shown in figure 4 during model training and is yet capable of achieving a relatively accurate approximation of the true solution. To be precise, the identified system reproduces the solution to the KdV equation with a relative $L^2$-error of 3.44e-02.

Figure 4: The KdV equation: A solution to the KdV equation (left panel) is compared to the corresponding solution of the learned partial differential equation (right panel). The identified system correctly captures the form of the dynamics and accurately reproduces the solution with a relative $L^2$-error of 3.44e-02. It should be highlighted that the algorithm is trained on a dataset totally different from the one shown in this figure.

3.3 Kuramoto-Sivashinsky equation

As a canonical model of a pattern forming system with spatio-temporal chaotic behavior we consider the Kuramoto-Sivashinsky equation. In one space dimension the Kuramoto-Sivashinsky equation reads as

$u_t = -u\, u_x - u_{xx} - u_{xxxx}.$   (8)

We generate a dataset containing a direct numerical solution of the Kuramoto-Sivashinsky equation (8) with 512 spatial points and 251 snapshots. To be precise, assuming periodic boundary conditions, we start from a given initial condition and integrate equation (8) up to a final time. We employ the Chebfun package driscoll2014chebfun with a spectral Fourier discretization with 512 modes and a fourth-order explicit Runge-Kutta temporal integrator, and save the solution at equally spaced snapshots. From this dataset, we create a smaller training subset, scattered in space and time, by randomly sub-sampling data points up to the final time. Given the resulting training data, we are interested in learning $\mathcal{N}$ as a function of the solution $u$ and its derivatives up to the 4th order (a detailed study of the choice of the order is provided in section 3.1 for the Burgers' equation); i.e.,

$u_t = \mathcal{N}(u, u_x, u_{xx}, u_{xxx}, u_{xxxx}).$   (9)

We let the solution $u$ be represented by a 5-layer deep neural network with 50 neurons per hidden layer. Furthermore, we approximate the nonlinear function $\mathcal{N}$ by a neural network with 2 hidden layers and 100 neurons per hidden layer. These two networks are trained by minimizing the sum of squared errors loss of equation (3). To demonstrate the effectiveness of our approach, we solve the learned partial differential equation (9) using the PINNs algorithm raissi2017physics_I. We assume the same initial and boundary conditions as the ones used to generate the original dataset. The resulting solution of the learned partial differential equation alongside the exact solution of the Kuramoto-Sivashinsky equation are depicted in figure 5. This figure indicates that our algorithm is capable of identifying the underlying partial differential equation with a relative $L^2$-error of 7.63e-02.

Figure 5: Kuramoto-Sivashinsky equation: A solution to the Kuramoto-Sivashinsky equation (left panel) is compared to the corresponding solution of the learned partial differential equation (right panel). The identified system correctly captures the form of the dynamics and reproduces the solution with a relative $L^2$-error of 7.63e-02.

However, it must be mentioned that we are avoiding the regimes where the Kuramoto-Sivashinsky equation becomes chaotic. For instance, by changing the domain and the initial condition and by integrating the Kuramoto-Sivashinsky equation (8) up to a later final time, one could end up with a relatively complicated solution as depicted in figure 6. This problem remains out of reach for the algorithm proposed in the current work as well as the PINNs framework introduced in raissi2017physics_I; raissi2017physics_II. It is not difficult for a plain vanilla neural network to approximate the function depicted in figure 6. However, once we compute the derivatives required in equation (2), minimizing the loss function (3) becomes a challenge. Understanding what makes this problem hard to solve should be at the core of future extensions of this line of research.

Figure 6: Kuramoto-Sivashinsky equation: A solution to the Kuramoto-Sivashinsky equation in a chaotic regime. This problem remains out of reach for the algorithm proposed in the current work.

3.4 Nonlinear Schrödinger equation

The one-dimensional nonlinear Schrödinger equation is a classical field equation that is used to study nonlinear wave propagation in optical fibers and/or waveguides, Bose-Einstein condensates, and plasma waves. This example aims to highlight the ability of our framework to handle complex-valued solutions as well as different types of nonlinearities in the governing partial differential equations. The nonlinear Schrödinger equation is given by

$i\, h_t + \tfrac{1}{2}\, h_{xx} + |h|^2 h = 0.$   (10)

Let $u$ denote the real part of $h$ and $v$ the imaginary part. Then, the nonlinear Schrödinger equation can be equivalently written as the system of partial differential equations

$u_t = -\tfrac{1}{2}\, v_{xx} - \left(u^2 + v^2\right) v, \qquad v_t = \tfrac{1}{2}\, u_{xx} + \left(u^2 + v^2\right) u.$   (11)

In order to assess the performance of our method, we simulate equation (10) using conventional spectral methods to create a high-resolution dataset. Specifically, starting from a given initial state and assuming periodic boundary conditions, we integrate equation (10) up to a final time using the Chebfun package driscoll2014chebfun. We use a spectral Fourier discretization with 256 modes and a fourth-order explicit Runge-Kutta temporal integrator, and save the solution at approximately equally spaced times to give us a total of 501 snapshots. Out of this dataset, we generate a smaller training subset, scattered in space and time, by randomly sub-sampling data points up to the final time. Given the resulting training data, we are interested in learning two nonlinear functions $\mathcal{N}_u$ and $\mathcal{N}_v$ as functions of the solutions $u$, $v$ and their derivatives up to the 2nd order (a detailed study of the choice of the order is provided in section 3.1 for the Burgers' equation); i.e.,

$u_t = \mathcal{N}_u(u, v, u_x, v_x, u_{xx}, v_{xx}), \qquad v_t = \mathcal{N}_v(u, v, u_x, v_x, u_{xx}, v_{xx}).$   (12)

We represent the solutions $u$ and $v$ by two 5-layer deep neural networks with 50 neurons per hidden layer. Furthermore, we let $\mathcal{N}_u$ and $\mathcal{N}_v$ be two neural networks with 2 hidden layers and 100 neurons per hidden layer. These four networks are trained by minimizing the sum of squared errors loss of equation (3). To illustrate the effectiveness of our approach, we solve the learned system of partial differential equations (12), along with periodic boundary conditions and the same initial condition as the one used to generate the original dataset, using the PINNs algorithm raissi2017physics_I. The original dataset (in absolute value, i.e., $|h| = \sqrt{u^2 + v^2}$) alongside the resulting solution (also in absolute value) of the learned partial differential equation are depicted in figure 7. This figure indicates that our algorithm is able to accurately identify the underlying partial differential equation with a relative $L^2$-error of 6.28e-03.
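The complex-valued case changes only the bookkeeping. A minimal sketch of the two residuals of equation (12), following the same GradientTape pattern used in section 2, is given below; the network and variable names are illustrative assumptions.

```python
import tensorflow as tf

def schrodinger_residuals(u_net, v_net, Nu_net, Nv_net, t, x):
    """Residuals f_u := u_t - N_u(...) and f_v := v_t - N_v(...) for the real and
    imaginary parts, with each N taking both solutions and their spatial derivatives
    up to second order. t and x have shape (n_points, 1)."""
    with tf.GradientTape(persistent=True) as outer:
        outer.watch(x)
        with tf.GradientTape(persistent=True) as inner:
            inner.watch([t, x])
            tx = tf.concat([t, x], axis=1)
            u, v = u_net(tx), v_net(tx)
        u_t, u_x = inner.gradient(u, t), inner.gradient(u, x)
        v_t, v_x = inner.gradient(v, t), inner.gradient(v, x)
    u_xx, v_xx = outer.gradient(u_x, x), outer.gradient(v_x, x)
    feats = tf.concat([u, v, u_x, v_x, u_xx, v_xx], axis=1)
    return u_t - Nu_net(feats), v_t - Nv_net(feats)
```

The four networks are then trained on the sum of squared data misfits for $u$ and $v$ plus the squared norms of both residuals, as in equation (3).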

Figure 7: Nonlinear Schrödinger equation: The absolute value of a solution to the nonlinear Schrödinger equation (left panel) is compared to the absolute value of the corresponding solution of the learned partial differential equation (right panel). The identified system correctly captures the form of the dynamics and accurately reproduces the absolute value of the solution with a relative $L^2$-error of 6.28e-03.

3.5 Navier-Stokes equation

Let us consider the Navier-Stokes equation in two dimensions (2D) (it is straightforward to generalize the proposed framework to the Navier-Stokes equation in three dimensions (3D)), given explicitly by

$w_t = -u\, w_x - v\, w_y + \frac{1}{Re}\left(w_{xx} + w_{yy}\right),$   (13)

where $w$ denotes the vorticity, $u$ the $x$-component of the velocity field, $v$ the $y$-component, and $Re$ the Reynolds number. To generate a training dataset for this problem we follow the exact same instructions as the ones provided in kutz2016dynamic; Rudye1602614. Specifically, we simulate the Navier-Stokes equations describing the two-dimensional fluid flow past a circular cylinder at Reynolds number 100 using the Immersed Boundary Projection Method taira2007immersed; colonius2008fast. This approach utilizes a multi-domain scheme with four nested domains, each successive grid being twice as large as the previous one. Length and time are nondimensionalized so that the cylinder has unit diameter and the flow has unit velocity. Data are collected on the finest domain. The flow solver uses a 3rd-order Runge-Kutta integration scheme with a time step of $\Delta t = 0.02$, which has been verified to yield well-resolved and converged flow fields. After the simulation converges to steady periodic vortex shedding, flow snapshots are saved at regular intervals. We use a small portion of the resulting dataset for model training. In particular, we subsample data points, scattered in space and time, in the rectangular region (dashed red box) downstream of the cylinder shown in figure 8.

Figure 8: Navier-Stokes equation: A snapshot of the vorticity field of a solution to the Navier-Stokes equations for the fluid flow past a cylinder. The dashed red box in this panel specifies the sampling region.

Given the training data, we are interested in learning $\mathcal{N}$ as a function of the stream-wise ($u$) and transverse ($v$) velocity components, in addition to the vorticity $w$ and its derivatives up to the 2nd order (a detailed study of the choice of the order is provided in section 3.1 for the Burgers' equation); i.e.,

$w_t = \mathcal{N}(u, v, w, w_x, w_y, w_{xx}, w_{xy}, w_{yy}).$   (14)

We represent the solution $w$ by a 5-layer deep neural network with 200 neurons per hidden layer. Furthermore, we let $\mathcal{N}$ be a neural network with 2 hidden layers and 100 neurons per hidden layer. These two networks are trained by minimizing a sum of squared errors loss of the same form as equation (3), where the training data are given on the vorticity $w$ and the deep hidden physics model now reads $f := w_t - \mathcal{N}(u, v, w, w_x, w_y, w_{xx}, w_{xy}, w_{yy})$.

To illustrate the effectiveness of our approach, we solve the learned partial differential equation (14), in the region specified in figure 8 by the dashed red box, using the PINNs algorithm raissi2017physics_I. We use the exact solution to provide the required Dirichlet boundary conditions as well as the initial condition needed to solve the learned partial differential equation (14). A randomly picked snapshot of the vorticity field in the original dataset, alongside the corresponding snapshot of the solution of the learned partial differential equation, is depicted in figure 9. This figure indicates that our algorithm is able to accurately identify the underlying partial differential equation with a relative $L^2$-error of 5.79e-03 in space and time.

Figure 9: Navier-Stokes equation: A randomly picked snapshot of a solution to the Navier-Stokes equation (left panel) is compared to the corresponding snapshot of the solution of the learned partial differential equation (right panel). The identified system correctly captures the form of the dynamics and accurately reproduces the solution with a relative $L^2$-error of 5.79e-03 in space and time.

4 Summary and Discussion

We have presented a deep learning approach for extracting nonlinear partial differential equations from spatio-temporal datasets. The proposed algorithm leverages recent developments in automatic differentiation to construct efficient algorithms for learning infinite dimensional dynamical systems using deep neural networks. In order to validate the performance of our approach we had no other choice than to rely on black-box solvers (see e.g., raissi2017physics_II). This signifies the importance of developing general purpose partial differential equation solvers. The development of such solvers is still in its infancy, and more collaborative work is needed to bring them to the maturity level of conventional methods such as finite elements, finite differences, and spectral methods, which have been around for more than half a century.

There exist a series of open questions mandating further investigation. For instance, many real-world partial differential equations depend on parameters and, when the parameters are varied, they may go through bifurcations (e.g., the Reynolds number for the Navier-Stokes equations). Here, the goal would be to collect data from the underlying dynamics corresponding to various parameter values, and infer the parameterized partial differential equation. Another exciting avenue of future research would be to apply convolutional architectures goodfellow2016deep to mitigate the complexity associated with partial differential equations with very high-dimensional inputs. These types of equations appear routinely in dynamic programming, optimal control, and reinforcement learning. Moreover, a quick look at the list of nonlinear partial differential equations on Wikipedia reveals that many of these equations take the form specified in equation (1). However, a handful of them do not take this form, including Boussinesq-type equations, which involve a second order derivative in time. It would be interesting to extend the framework outlined in the current work to incorporate all such cases. In the end, it is not always clear what measurements of a dynamical system to take. Even if we did know, collecting these measurements might be prohibitively expensive. It is well-known that time-delay coordinates of a single variable can act as additional variables. It might be interesting to investigate this idea in the infinite dimensional setting of partial differential equations.

Acknowledgements

This work received support from the DARPA EQUiPS grant N66001-15-2-4055 and the AFOSR grant FA9550-17-1-0013. All data and codes used in this manuscript are publicly available on GitHub at https://github.com/maziarraissi/DeepHPMs.

References