Physics Informed Deep Learning (Part II): Data-driven Discovery of Nonlinear Partial Differential Equations

11/28/2017 ∙ by Maziar Raissi, et al.

We introduce physics informed neural networks -- neural networks that are trained to solve supervised learning tasks while respecting any given law of physics described by general nonlinear partial differential equations. In this second part of our two-part treatise, we focus on the problem of data-driven discovery of partial differential equations. Depending on whether the available data is scattered in space-time or arranged in fixed temporal snapshots, we introduce two main classes of algorithms, namely continuous time and discrete time models. The effectiveness of our approach is demonstrated using a wide range of benchmark problems in mathematical physics, including conservation laws, incompressible fluid flow, and the propagation of nonlinear shallow-water waves.




1 Introduction

Deep learning has gained unprecedented attention over the last few years, and deservedly so, as it has introduced transformative solutions across diverse scientific disciplines [krizhevsky2012imagenet; lecun2015deep; lake2015human; alipanahi2015predicting]. Despite this ongoing success, many scientific applications have yet to benefit from this emerging technology, primarily due to the high cost of data acquisition. It is well known that current state-of-the-art machine learning tools (e.g., deep, convolutional, and recurrent neural networks) lack robustness and fail to provide any guarantees of convergence when operating in the small data regime, i.e., the regime where very few training examples are available.

In the first part of this study, we introduced physics informed neural networks as a viable solution for training deep neural networks with few training examples, for cases where the available data is known to respect a given physical law described by a system of partial differential equations. Such cases are abundant in the study of physical, biological, and engineering systems, where longstanding developments of mathematical physics have shed tremendous insight into how such systems are structured, interact, and dynamically evolve in time. We saw how knowledge of an underlying physical law can introduce structure that effectively regularizes the training of neural networks, and enables them to generalize well even when only a few training examples are available. Through the lens of different benchmark problems, we highlighted the key features of physics informed neural networks in the context of data-driven solutions of partial differential equations [raissi2017numerical; raissi2017inferring].

In this second part of our study, we shift our attention to the problem of data-driven discovery of partial differential equations [raissi2017hidden; raissi2017machine; Rudye1602614]. To this end, let us consider parametrized and nonlinear partial differential equations of the general form

u_t + N[u; λ] = 0,  x ∈ Ω, t ∈ [0, T],   (1)

where u(t, x) denotes the latent (hidden) solution, N[·; λ] is a nonlinear operator parametrized by λ, and Ω is a subset of R^D. This setup encapsulates a wide range of problems in mathematical physics including conservation laws, diffusion processes, advection-diffusion-reaction systems, and kinetic equations. As a motivating example, the one dimensional Burgers' equation [basdevant1986spectral] corresponds to the case where N[u; λ] = λ₁ u u_x − λ₂ u_xx and λ = (λ₁, λ₂). Here, the subscripts denote partial differentiation in either time or space. Now, the problem of data-driven discovery of partial differential equations poses the following question: given a small set of scattered and potentially noisy observations of the hidden state u(t, x) of a system, what are the parameters λ that best describe the observed data?

In what follows, we will provide an overview of our two main approaches to tackle this problem, namely continuous time and discrete time models, as well as a series of results and systematic studies for a diverse collection of benchmarks. In the first approach, we will assume availability of scattered and potentially noisy measurements across the entire spatio-temporal domain. In the second, we will try to infer the unknown parameters from only two data snapshots taken at distinct time instants. All data and codes used in this manuscript are publicly available on GitHub at https://github.com/maziarraissi/PINNs.

2 Continuous Time Models

We define f(t, x) to be given by the left-hand-side of equation (1); i.e.,

f := u_t + N[u; λ],   (2)

and proceed by approximating u(t, x) by a deep neural network. This assumption, along with equation (2), results in a physics informed neural network f(t, x). This network can be derived by applying the chain rule for differentiating compositions of functions using automatic differentiation [baydin2015automatic]. It is worth highlighting that the parameters λ of the differential operator turn into parameters of the physics informed neural network f(t, x).
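To make the construction of f concrete, the following is a minimal pure-Python sketch (not the paper's TensorFlow code) of how automatic differentiation turns an approximation of u(t, x) into the Burgers' residual f = u_t + λ₁ u u_x − λ₂ u_xx. Second-order forward-mode AD is implemented with truncated Taylor "jets"; the closed-form u below is a hypothetical stand-in for a trained network.

```python
import math

class Jet:
    """Second-order Taylor jet (value, first, second derivative) for
    forward-mode automatic differentiation in one seeded variable."""
    def __init__(self, val, d1=0.0, d2=0.0):
        self.val, self.d1, self.d2 = val, d1, d2
    def __add__(self, o):
        o = o if isinstance(o, Jet) else Jet(o)
        return Jet(self.val + o.val, self.d1 + o.d1, self.d2 + o.d2)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Jet) else Jet(o)
        return Jet(self.val * o.val,
                   self.d1 * o.val + self.val * o.d1,
                   self.d2 * o.val + 2.0 * self.d1 * o.d1 + self.val * o.d2)
    __rmul__ = __mul__

def sin(j):
    s, c = math.sin(j.val), math.cos(j.val)
    return Jet(s, c * j.d1, c * j.d2 - s * j.d1 ** 2)

def exp(j):
    e = math.exp(j.val)
    return Jet(e, e * j.d1, e * (j.d2 + j.d1 ** 2))

def u(t, x):
    # hypothetical stand-in for the trained network u(t, x)
    return exp(-1.0 * t) * sin(x)

def burgers_residual(t, x, lam1, lam2):
    u_t = u(Jet(t, 1.0), Jet(x)).d1      # seed d/dt, hold x fixed
    ux = u(Jet(t), Jet(x, 1.0))          # seed d/dx, hold t fixed
    return u_t + lam1 * ux.val * ux.d1 - lam2 * ux.d2
```

In the actual implementation the derivatives are obtained the same way in spirit, only with reverse-mode automatic differentiation applied to a deep network, so λ₁ and λ₂ become trainable parameters of the residual network f.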

2.1 Example (Burgers’ Equation)

As a first example, let us consider the Burgers' equation. This equation arises in various areas of applied mathematics, including fluid mechanics, nonlinear acoustics, gas dynamics, and traffic flow [basdevant1986spectral]. It is a fundamental partial differential equation and can be derived from the Navier-Stokes equations for the velocity field by dropping the pressure gradient term. For small values of the viscosity parameter, Burgers' equation can lead to shock formation that is notoriously hard to resolve by classical numerical methods. In one space dimension the equation reads as

u_t + λ₁ u u_x − λ₂ u_xx = 0.   (3)

Let us define f(t, x) to be given by

f := u_t + λ₁ u u_x − λ₂ u_xx,   (4)

and proceed by approximating u(t, x) by a deep neural network. This will result in the physics informed neural network f(t, x). The shared parameters of the neural networks u(t, x) and f(t, x), along with the parameters λ = (λ₁, λ₂) of the differential operator, can be learned by minimizing the mean squared error loss

MSE = MSE_u + MSE_f,   (5)

where

MSE_u = (1/N) Σ_{i=1}^{N} |u(t_u^i, x_u^i) − u^i|²

and

MSE_f = (1/N) Σ_{i=1}^{N} |f(t_u^i, x_u^i)|².

Here, {t_u^i, x_u^i, u^i}_{i=1}^{N} denote the training data on u(t, x). The loss MSE_u corresponds to the training data on u(t, x), while MSE_f enforces the structure imposed by equation (3) at a finite set of collocation points, whose number and location is taken to be the same as the training data.
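The mechanics of this loss can be illustrated with a deliberately simplified stand-in (an illustration, not the paper's method): instead of training a network, evaluate the derivatives of a known field with central finite differences and recover (λ₁, λ₂) by linear least squares on the residual f = u_t + λ₁ u u_x − λ₂ u_xx. The field u = exp(−ν t) sin(x) satisfies equation (3) with λ₁ = 0 and λ₂ = ν, so those are the values the fit should return.

```python
import math

nu = 0.1
u = lambda t, x: math.exp(-nu * t) * math.sin(x)

d = 1e-3  # finite-difference step
A11 = A12 = A22 = b1 = b2 = 0.0
for i in range(20):
    for j in range(20):
        t, x = 0.1 + 0.04 * i, 0.1 + 0.15 * j
        u_t = (u(t + d, x) - u(t - d, x)) / (2 * d)
        u_x = (u(t, x + d) - u(t, x - d)) / (2 * d)
        u_xx = (u(t, x + d) - 2 * u(t, x) + u(t, x - d)) / d ** 2
        # residual f = u_t + lam1*c1 + lam2*c2 with c1 = u*u_x, c2 = -u_xx;
        # accumulate the 2x2 normal equations for min sum of f^2
        c1, c2, y = u(t, x) * u_x, -u_xx, -u_t
        A11 += c1 * c1; A12 += c1 * c2; A22 += c2 * c2
        b1 += c1 * y; b2 += c2 * y

det = A11 * A22 - A12 * A12
lam1 = (b1 * A22 - b2 * A12) / det   # expected ~ 0
lam2 = (A11 * b2 - A12 * b1) / det   # expected ~ nu
```

The full method replaces the finite differences with automatic differentiation of a network and the closed-form least-squares solve with gradient-based minimization of (5), but the role of the residual term MSE_f is the same: it pins λ to values that make the data consistent with equation (3).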

To illustrate the effectiveness of our approach, we have created a training data-set by randomly generating N = 2,000 points across the entire spatio-temporal domain from the exact solution corresponding to λ₁ = 1.0 and λ₂ = 0.01/π. The locations of the training points are illustrated in the top panel of figure 1. This data-set is then used to train a 9-layer deep neural network with 20 neurons per hidden layer by minimizing the mean squared error loss of (5) using the L-BFGS optimizer [liu1989limited]. Upon training, the network is calibrated to predict the entire solution u(t, x), as well as the unknown parameters λ = (λ₁, λ₂) that define the underlying dynamics. A visual assessment of the predictive accuracy of the physics informed neural network is given in the middle and bottom panels of figure 1. The network is able to identify the underlying partial differential equation with remarkable accuracy, even in the case where the scattered training data is corrupted with 1% uncorrelated noise.

Figure 1: Burgers' equation: Top: Predicted solution u(t, x) along with the training data. Middle: Comparison of the predicted and exact solutions corresponding to the three temporal snapshots depicted by the dashed vertical lines in the top panel. Bottom: Correct partial differential equation along with the identified one obtained by learning λ₁ and λ₂.

To further scrutinize the performance of our algorithm, we have performed a systematic study with respect to the total number of training data, the noise corruption levels, and the neural network architecture. The results are summarized in tables 1 and 2. The key observation here is that the proposed methodology appears to be very robust with respect to noise levels in the data, yielding a reasonable identification accuracy even for noise corruption up to 10%. This enhanced robustness appears to greatly outperform competing approaches using Gaussian process regression, as previously reported in [raissi2017hidden], as well as approaches relying on sparse regression that require relatively clean data for accurately computing numerical gradients [brunton2016discovering].

              % error in λ₁                   % error in λ₂
N_u      0%      1%      5%     10%       0%      1%      5%     10%
500    0.131   0.518   0.118   1.319   13.885   0.483   1.708   4.058
1000   0.186   0.533   0.157   1.869    3.719   8.262   3.481  14.544
1500   0.432   0.033   0.706   0.725    3.093   1.423   0.502   3.156
2000   0.096   0.039   0.190   0.101    0.469   0.008   6.216   6.391
Table 1: Burgers' equation: Percentage error in the identified parameters λ₁ and λ₂ for different numbers N_u of training data corrupted by different noise levels. Here, the neural network architecture is kept fixed to 9 layers and 20 neurons per layer.
                   % error in λ₁        % error in λ₂
Layers\Neurons    10     20     40     10     20     40
2
4
6
8
Table 2: Burgers' equation: Percentage error in the identified parameters λ₁ and λ₂ for different numbers of hidden layers and neurons per layer. Here, the training data is considered to be noise-free and fixed to N = 2,000.

2.2 Example (Navier-Stokes Equation)

Our next example involves a realistic scenario of incompressible fluid flow as described by the ubiquitous Navier-Stokes equations. The Navier-Stokes equations describe the physics of many phenomena of scientific and engineering interest. They may be used to model the weather, ocean currents, water flow in a pipe, and air flow around a wing. The Navier-Stokes equations, in their full and simplified forms, help with the design of aircraft and cars, the study of blood flow, the design of power stations, the analysis of the dispersion of pollutants, and many other applications. Let us consider the Navier-Stokes equations in two dimensions (2D) (it is straightforward to generalize the proposed framework to the Navier-Stokes equations in three dimensions), given explicitly by

u_t + λ₁ (u u_x + v u_y) = −p_x + λ₂ (u_xx + u_yy),
v_t + λ₁ (u v_x + v v_y) = −p_y + λ₂ (v_xx + v_yy),   (6)

where u(t, x, y) denotes the x-component of the velocity field, v(t, x, y) the y-component, and p(t, x, y) the pressure. Here, λ = (λ₁, λ₂) are the unknown parameters. Solutions to the Navier-Stokes equations are searched in the set of divergence-free functions; i.e.,

u_x + v_y = 0.   (7)

This extra equation is the continuity equation for incompressible fluids that describes the conservation of mass of the fluid. We make the assumption that

u = ψ_y,  v = −ψ_x,   (8)

for some latent function ψ(t, x, y). (This construction can be generalized to three dimensional problems by employing the notion of vector potentials.) Under this assumption, the continuity equation (7) will be automatically satisfied. Given noisy measurements {t^i, x^i, y^i, u^i, v^i}_{i=1}^{N} of the velocity field, we are interested in learning the parameters λ as well as the pressure p(t, x, y). We define f(t, x, y) and g(t, x, y) to be given by

f := u_t + λ₁ (u u_x + v u_y) + p_x − λ₂ (u_xx + u_yy),
g := v_t + λ₁ (u v_x + v v_y) + p_y − λ₂ (v_xx + v_yy),   (9)

and proceed by jointly approximating (ψ(t, x, y), p(t, x, y)) using a single neural network with two outputs. This prior assumption, along with equations (8) and (9), results in a physics informed neural network (f(t, x, y), g(t, x, y)). The parameters λ of the Navier-Stokes operator, as well as the parameters of the neural networks (ψ, p) and (f, g), can be trained by minimizing the mean squared error loss

MSE := (1/N) Σ_{i=1}^{N} ( |u(t^i, x^i, y^i) − u^i|² + |v(t^i, x^i, y^i) − v^i|² ) + (1/N) Σ_{i=1}^{N} ( |f(t^i, x^i, y^i)|² + |g(t^i, x^i, y^i)|² ).   (10)
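The streamfunction assumption of equation (8) can be sanity-checked numerically. The sketch below (assuming the standard streamfunction definition u = ψ_y, v = −ψ_x; the particular ψ is an arbitrary smooth test function, not from the paper) verifies that the resulting velocity field is divergence-free, which is why the continuity equation (7) never has to appear in the training loss.

```python
import math

H = 1e-3  # finite-difference step
# arbitrary smooth test streamfunction psi(x, y)
psi = lambda x, y: math.sin(2 * x) * math.exp(0.5 * y) + x * y ** 2
u = lambda x, y: (psi(x, y + H) - psi(x, y - H)) / (2 * H)    # u = psi_y
v = lambda x, y: -(psi(x + H, y) - psi(x - H, y)) / (2 * H)   # v = -psi_x

def divergence(x, y):
    # u_x + v_y approximates psi_yx - psi_xy, which vanishes identically
    ux = (u(x + H, y) - u(x - H, y)) / (2 * H)
    vy = (v(x, y + H) - v(x, y - H)) / (2 * H)
    return ux + vy
```

Because the mixed-partial stencils for u_x and v_y use the same four ψ evaluations with opposite signs, the divergence vanishes to floating-point precision for any ψ, mirroring how the network output ψ enforces (7) exactly by construction.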

Here we consider the prototype problem of incompressible flow past a circular cylinder, a problem known to exhibit rich dynamic behavior and transitions for different regimes of the Reynolds number Re. Assuming a non-dimensional free stream velocity u_∞ = 1, cylinder diameter D = 1, and kinematic viscosity ν = 0.01, the system exhibits a periodic steady state behavior characterized by an asymmetric vortex shedding pattern in the cylinder wake, known as the Kármán vortex street [von1963aerodynamics].

To generate a high-resolution data set for this problem, we have employed the spectral/hp-element solver NekTar [karniadakis2013spectral]. Specifically, the solution domain is discretized in space by a tessellation consisting of 412 triangular elements, and within each element the solution is approximated as a linear combination of a tenth-order hierarchical, semi-orthogonal Jacobi polynomial expansion [karniadakis2013spectral]. We have assumed a uniform free stream velocity profile imposed at the left boundary, a zero pressure outflow condition imposed at the right boundary located 25 diameters downstream of the cylinder, and periodicity for the top and bottom boundaries of the domain. We integrate equation (6) using a third-order stiffly stable scheme [karniadakis2013spectral] until the system reaches a periodic steady state, as depicted in figure 2(a). In what follows, a small portion of the resulting data-set corresponding to this steady state solution will be used for model training, while the remaining data will be used to validate our predictions. For simplicity, we have chosen to confine our sampling to a rectangular region downstream of the cylinder, as shown in figure 2(a).

Given scattered and potentially noisy data on the stream-wise u(t, x, y) and transverse v(t, x, y) velocity components, our goal is to identify the unknown parameters λ₁ and λ₂, as well as to obtain a qualitatively accurate reconstruction of the entire pressure field p(t, x, y) in the cylinder wake, which by definition can only be identified up to a constant. To this end, we have created a training data-set by randomly sub-sampling the full high-resolution data-set. To highlight the ability of our method to learn from scattered and scarce training data, we have chosen N = 5,000, corresponding to a mere 1% of the total available data, as illustrated in figure 2(b). Also plotted are representative snapshots of the predicted velocity components u(t, x, y) and v(t, x, y) after the model was trained. The neural network architecture used here consists of 9 layers with 20 neurons in each layer.

Figure 2: Navier-Stokes equation: Top: Incompressible flow and dynamic vortex shedding past a circular cylinder. The spatio-temporal training data correspond to the depicted rectangular region in the cylinder wake. Bottom: Locations of training data-points for the stream-wise and transverse velocity components, u(t, x, y) and v(t, x, y), respectively.

A summary of our results for this example is presented in figure 3. We observe that the physics informed neural network is able to correctly identify the unknown parameters λ₁ and λ₂ with very high accuracy, even when the training data is corrupted with noise. Specifically, for the case of noise-free training data, the errors in estimating λ₁ and λ₂ are 0.078% and 4.67%, respectively. The predictions remain robust even when the training data are corrupted with 1% uncorrelated Gaussian noise, returning errors of 0.17% and 5.70% for λ₁ and λ₂, respectively.

A more intriguing result stems from the network’s ability to provide a qualitatively accurate prediction of the entire pressure field in the absence of any training data on the pressure itself. A visual comparison against the exact pressure solution is presented in figure 3 for a representative pressure snapshot. Notice that the difference in magnitude between the exact and the predicted pressure is justified by the very nature of the Navier-Stokes system, as the pressure field is only identifiable up to a constant. This result of inferring a continuous quantity of interest from auxiliary measurements by leveraging the underlying physics is a great example of the enhanced capabilities that physics informed neural networks have to offer, and highlights their potential in solving high-dimensional inverse problems.

Figure 3: Navier-Stokes equation: Top: Predicted versus exact instantaneous pressure field p(t, x, y) at a representative time instant. By definition, the pressure can be recovered only up to a constant, hence justifying the different magnitude between the two plots. This remarkable qualitative agreement highlights the ability of physics-informed neural networks to identify the entire pressure field, despite the fact that no data on the pressure are used during model training. Bottom: Correct partial differential equation along with the identified one obtained by learning λ₁ and λ₂.

Our approach so far assumes availability of scattered data throughout the entire spatio-temporal domain. However, in many cases of practical interest, one may only be able to observe the system at distinct time instants. In the next section, we introduce a different approach that tackles the data-driven discovery problem using only two data snapshots. We will see how, by leveraging the classical Runge-Kutta time-stepping schemes, one can construct discrete time physics informed neural networks that can retain high predictive accuracy even when the temporal gap between the data snapshots is very large.

3 Discrete Time Models

We begin by applying the general form of Runge-Kutta methods with q stages to equation (1) and obtain

u^{n+c_i} = u^n − Δt Σ_{j=1}^{q} a_{ij} N[u^{n+c_j}; λ],  i = 1, …, q,
u^{n+1} = u^n − Δt Σ_{j=1}^{q} b_j N[u^{n+c_j}; λ].   (11)

Here, u^{n+c_j}(x) = u(t^n + c_j Δt, x) for j = 1, …, q. This general form encapsulates both implicit and explicit time-stepping schemes, depending on the choice of the parameters {a_{ij}, b_j, c_j}. Equations (11) can be equivalently expressed as

u^n = u^n_i,  u^{n+1} = u^{n+1}_i,  i = 1, …, q,   (12)

where

u^n_i := u^{n+c_i} + Δt Σ_{j=1}^{q} a_{ij} N[u^{n+c_j}; λ],
u^{n+1}_i := u^{n+c_i} + Δt Σ_{j=1}^{q} (a_{ij} − b_j) N[u^{n+c_j}; λ].   (13)

We proceed by placing a multi-output neural network prior on

(u^{n+c_1}(x), …, u^{n+c_q}(x)).   (14)

This prior assumption, along with equations (13), results in two physics informed neural networks

(u^n_1(x), …, u^n_q(x))   (15)

and

(u^{n+1}_1(x), …, u^{n+1}_q(x)).   (16)

Given noisy measurements at two distinct temporal snapshots {x^n, u^n} and {x^{n+1}, u^{n+1}} of the system at times t^n and t^{n+1}, respectively, the shared parameters of the neural networks (14), (15), and (16), along with the parameters λ of the differential operator, can be trained by minimizing the sum of squared errors

SSE = SSE_n + SSE_{n+1},   (17)

where

SSE_n := Σ_{j=1}^{q} Σ_{i=1}^{N_n} |u^n_j(x^{n,i}) − u^{n,i}|²

and

SSE_{n+1} := Σ_{j=1}^{q} Σ_{i=1}^{N_{n+1}} |u^{n+1}_j(x^{n+1,i}) − u^{n+1,i}|².

Here, x^n = {x^{n,i}}_{i=1}^{N_n}, u^n = {u^{n,i}}_{i=1}^{N_n}, x^{n+1} = {x^{n+1,i}}_{i=1}^{N_{n+1}}, and u^{n+1} = {u^{n+1,i}}_{i=1}^{N_{n+1}}.
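The mechanics behind equations (11) and (13) can be sketched for the scalar model problem u_t + λu = 0 (so N[u; λ] = λu), using the one-stage implicit midpoint rule a₁₁ = 1/2, b₁ = 1, c₁ = 1/2. This is an illustrative choice; the actual algorithm uses implicit schemes with a very large number of stages q. The key point is that both snapshots u^n and u^{n+1} are reconstructed from the same stage value via (13).

```python
import math

lam, dt, u_n = 1.0, 0.1, 1.0  # model parameters and initial snapshot

# implicit stage equation (11): s = u_n - dt*a11*lam*s, solved in closed form
s = u_n / (1.0 + 0.5 * dt * lam)           # s = u^{n+c1}
u_np1 = u_n - dt * 1.0 * lam * s           # u^{n+1}, with b1 = 1

# equations (13): both snapshots reconstructed from the stage value alone
u_n_rec = s + dt * 0.5 * lam * s           # equals u^n exactly
u_np1_rec = s + dt * (0.5 - 1.0) * lam * s  # equals u^{n+1} exactly
```

In the discrete time model, a neural network plays the role of the stage values s(x), and minimizing the loss (17) forces both reconstructions in (13) to match the two measured snapshots, with λ entering as a trainable parameter.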

3.1 Example (Burgers’ Equation)

Let us illustrate the key features of this method through the lens of the Burgers' equation. Recall the equation's form

u_t + λ₁ u u_x − λ₂ u_xx = 0,   (18)

and notice that the nonlinear operator in equation (13) is given by

N[u^{n+c_j}; λ] = λ₁ u^{n+c_j} u_x^{n+c_j} − λ₂ u_xx^{n+c_j}.

Given merely two training data snapshots, the shared parameters of the neural networks (14), (15), and (16), along with the parameters λ = (λ₁, λ₂) of the Burgers' equation, can be learned by minimizing the sum of squared errors in equation (17). Here, we have created a training data-set comprising N_n = 199 and N_{n+1} = 201 spatial points by randomly sampling the exact solution at time instants t^n = 0.1 and t^{n+1} = 0.9, respectively. The training data are shown in the top and middle panels of figure 4. The neural network architecture used here consists of 4 hidden layers with 50 neurons each, while the number of Runge-Kutta stages is empirically chosen to yield a temporal error accumulation of the order of machine precision ε by setting

q = 0.5 log ε / log(Δt),   (19)

where the time-step for this example is Δt = 0.8. (This choice is motivated by theoretical error estimates for implicit Runge-Kutta schemes suggesting a truncation error of O(Δt^{2q}) [iserles2009first].) The bottom panel of figure 4 summarizes the identified parameters λ₁ and λ₂ for the cases of noise-free data, as well as noisy data with 1% of Gaussian uncorrelated noise corruption. For both cases, the proposed algorithm is able to learn the correct parameter values with remarkable accuracy, despite the fact that the two data snapshots used for training are very far apart, and potentially describe different regimes of the underlying dynamics.
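Equation (19) is a one-liner in code: choose q so that the O(Δt^{2q}) truncation error of an implicit Runge-Kutta scheme reaches machine precision ε.

```python
import math

def stages(dt, eps=2.220446049250313e-16):
    """Number of Runge-Kutta stages from equation (19):
    smallest integer q with dt^(2q) at machine precision eps."""
    return math.ceil(0.5 * math.log(eps) / math.log(dt))
```

For a gap of Δt = 0.8 in double precision this yields q = 81 stages, and a smaller gap of Δt = 0.6 yields q = 36; note that q grows as the snapshots move further apart, which is exactly what lets the method bridge large temporal gaps without losing accuracy.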

Figure 4: Burgers' equation: Top: Solution u(t, x) along with the temporal locations of the two training snapshots. Middle: Training data and exact solution corresponding to the two temporal snapshots depicted by the dashed vertical lines in the top panel. Bottom: Correct partial differential equation along with the identified one obtained by learning λ₁ and λ₂.

A sensitivity analysis is performed to quantify the accuracy of our predictions with respect to the gap Δt between the training snapshots, the noise level in the training data, and the physics informed neural network architecture. As shown in table 3, the proposed algorithm is quite robust to both Δt and the noise corruption levels, and it consistently returns reasonable estimates for the unknown parameters. This robustness is mainly attributed to the flexibility of the underlying implicit Runge-Kutta scheme to admit an arbitrarily high number of stages, allowing the data snapshots to be very far apart in time, while not compromising the accuracy with which the nonlinear dynamics of equation (18) are resolved. This is the key highlight of our discrete time formulation for identification problems, setting it apart from competing approaches [raissi2017hidden; brunton2016discovering]. Lastly, table 4 presents the percentage error in the identified parameters, demonstrating the robustness of our estimates with respect to the underlying neural network architecture.

          % error in λ₁          % error in λ₂
Δt       0%    1%    5%   10%    0%    1%    5%   10%
0.2
0.4
0.6
0.8
Table 3: Burgers' equation: Percentage error in the identified parameters λ₁ and λ₂ for different gap sizes Δt between the two data snapshots and for different noise levels.
                   % error in λ₁        % error in λ₂
Layers\Neurons    10     25     50     10     25     50
1
2
3
4
Table 4: Burgers' equation: Percentage error in the identified parameters λ₁ and λ₂ for different numbers of hidden layers and neurons in each layer.

3.2 Example (Korteweg-de Vries Equation)

Our final example aims to highlight the ability of the proposed framework to handle governing partial differential equations involving higher order derivatives. Here, we consider a mathematical model of waves on shallow water surfaces: the Korteweg-de Vries (KdV) equation. This equation can also be viewed as Burgers' equation with an added dispersive term. The KdV equation has several connections to physical problems. It describes the evolution of long one-dimensional waves in many physical settings, including shallow-water waves with weakly non-linear restoring forces, long internal waves in a density-stratified ocean, ion acoustic waves in a plasma, and acoustic waves on a crystal lattice. Moreover, the KdV equation is the governing equation of the string in the Fermi-Pasta-Ulam problem [dauxois2008fermi] in the continuum limit. The KdV equation reads as

u_t + λ₁ u u_x + λ₂ u_xxx = 0,   (20)

with λ = (λ₁, λ₂) being the unknown parameters. For the KdV equation, the nonlinear operator in equations (13) is given by

N[u^{n+c_j}; λ] = λ₁ u^{n+c_j} u_x^{n+c_j} + λ₂ u_xxx^{n+c_j},

and the shared parameters of the neural networks (14), (15), and (16), along with the parameters λ of the KdV equation, can be learned by minimizing the sum of squared errors (17).
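As a hedged numerical aside (not from the paper's code), the structure of this operator can be checked against the classical one-soliton solution of the standard KdV normalization, i.e., λ₁ = 6 and λ₂ = 1 in equation (20): u(t, x) = (c/2) sech²(√c/2 (x − ct)) should annihilate the residual u_t + 6 u u_x + u_xxx.

```python
import math

c = 1.0  # soliton speed

def u(t, x):
    # travelling one-soliton of u_t + 6*u*u_x + u_xxx = 0
    return 0.5 * c / math.cosh(0.5 * math.sqrt(c) * (x - c * t)) ** 2

def kdv_residual(t, x, h=1e-2):
    # central finite differences stand in for automatic differentiation;
    # note the five-point stencil needed for the third derivative
    u_t = (u(t + h, x) - u(t - h, x)) / (2 * h)
    u_x = (u(t, x + h) - u(t, x - h)) / (2 * h)
    u_xxx = (u(t, x + 2 * h) - 2 * u(t, x + h)
             + 2 * u(t, x - h) - u(t, x - 2 * h)) / (2 * h ** 3)
    return u_t + 6.0 * u(t, x) * u_x + u_xxx
```

The presence of u_xxx is what makes this example a stress test for the framework: the discrete time network must differentiate its output three times in space, which automatic differentiation handles as mechanically as the first derivative.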

To obtain a set of training and test data, we simulated equation (20) using conventional spectral methods. Specifically, starting from an initial condition u(0, x) = cos(πx) and assuming periodic boundary conditions, we have integrated equation (20) up to a final time t = 1.0 using the Chebfun package [driscoll2014chebfun] with a spectral Fourier discretization with 512 modes and a fourth-order explicit Runge-Kutta temporal integrator. Using this data-set, we then extract two solution snapshots at times t^n = 0.2 and t^{n+1} = 0.8, and randomly sub-sample them using N_n = 199 and N_{n+1} = 201 points to generate a training data-set. We then use these data to train a discrete time physics informed neural network by minimizing the sum of squared errors of equation (17) using L-BFGS [liu1989limited]. The network architecture used here comprises 4 hidden layers, 50 neurons per layer, and an output layer predicting the solution at the q Runge-Kutta stages, i.e., u^{n+c_j}(x), j = 1, …, q, where q is computed using equation (19) by setting Δt = 0.6.

The results of this experiment are summarized in figure 5. In the top panel, we present the exact solution u(t, x), along with the locations of the two data snapshots used for training. A more detailed overview of the exact solution and the training data is given in the middle panel. It is worth noticing how the complex nonlinear dynamics of equation (20) cause dramatic differences in the form of the solution between the two reported snapshots. Despite these differences, and the large temporal gap between the two training snapshots, our method is able to correctly identify the unknown parameters, regardless of whether the training data is corrupted with noise or not. Specifically, for the case of noise-free training data, the errors in estimating λ₁ and λ₂ are 0.023% and 0.006%, respectively, while the case with 1% noise in the training data returns errors of 0.057% and 0.017%, respectively.

Figure 5: KdV equation: Top: Solution u(t, x) along with the temporal locations of the two training snapshots. Middle: Training data and exact solution corresponding to the two temporal snapshots depicted by the dashed vertical lines in the top panel. Bottom: Correct partial differential equation along with the identified one obtained by learning λ₁ and λ₂.

4 Summary and Discussion

We have introduced physics informed neural networks, a new class of universal function approximators capable of encoding any underlying physical laws that govern a given data-set and that can be described by partial differential equations. In this work, we design data-driven algorithms for discovering dynamic models described by parametrized nonlinear partial differential equations. The inferred models allow us to construct computationally efficient and fully differentiable surrogates that can subsequently be used for different applications, including predictive forecasting, control, and optimization.

Although a series of promising results was presented, the reader may perhaps agree that this two-part treatise creates more questions than it answers. In a broader context, and along the way of seeking further understanding of such tools, we believe that this work advocates a fruitful synergy between machine learning and classical computational physics that has the potential to enrich both fields and lead to high-impact developments.

Acknowledgements

This work received support by the DARPA EQUiPS grant N66001-15-2-4055, the MURI/ARO grant W911NF-15-1-0562, and the AFOSR grant FA9550-17-1-0013. All data and codes used in this manuscript are publicly available on GitHub at https://github.com/maziarraissi/PINNs.
