Physics Informed Deep Learning (Part I): Data-driven Solutions of Nonlinear Partial Differential Equations

11/28/2017
by Maziar Raissi, et al.

We introduce physics informed neural networks -- neural networks that are trained to solve supervised learning tasks while respecting any given law of physics described by general nonlinear partial differential equations. In this two part treatise, we present our developments in the context of solving two main classes of problems: data-driven solution and data-driven discovery of partial differential equations. Depending on the nature and arrangement of the available data, we devise two distinct classes of algorithms, namely continuous time and discrete time models. The resulting neural networks form a new class of data-efficient universal function approximators that naturally encode any underlying physical laws as prior information. In this first part, we demonstrate how these networks can be used to infer solutions to partial differential equations, and obtain physics-informed surrogate models that are fully differentiable with respect to all input coordinates and free parameters.


1 Introduction

With the explosive growth of available data and computing resources, recent advances in machine learning and data analytics have yielded transformative results across diverse scientific disciplines, including image recognition krizhevsky2012imagenet, natural language processing lecun2015deep, cognitive science lake2015human, and genomics alipanahi2015predicting. However, more often than not, in the course of analyzing complex physical, biological or engineering systems, the cost of data acquisition is prohibitive, and we are inevitably faced with the challenge of drawing conclusions and making decisions under partial information. In this small data regime, the vast majority of state-of-the-art machine learning techniques (e.g., deep/convolutional/recurrent neural networks) lack robustness and fail to provide any guarantees of convergence.


At first sight, the task of training a deep learning algorithm to accurately identify a nonlinear map from a few – potentially very high-dimensional – input and output data pairs seems at best naive. Coming to our rescue, for many cases pertaining to the modeling of physical and biological systems there exists a vast amount of prior knowledge that is currently not being utilized in modern machine learning practice. Whether it is the principled physical laws that govern the time-dependent dynamics of a system, or some empirically validated rules or other domain expertise, this prior information can act as a regularization agent that constrains the space of admissible solutions to a manageable size (e.g., in incompressible fluid dynamics problems, by discarding any non-realistic flow solutions that violate the conservation of mass principle). In return, encoding such structured information into a learning algorithm amplifies the information content of the data that the algorithm sees, enabling it to quickly steer itself towards the right solution and generalize well even when only a few training examples are available.

The first glimpses of promise for exploiting structured prior information to construct data-efficient and physics-informed learning machines have already been showcased in the recent studies of raissi2017inferring; raissi2017machine; owhadi2015bayesian. There, the authors employed Gaussian process regression Rasmussen06gaussianprocesses to devise functional representations that are tailored to a given linear operator, and were able to accurately infer solutions and provide uncertainty estimates for several prototype problems in mathematical physics. Extensions to nonlinear problems were proposed in subsequent studies by Raissi et al. raissi2017numerical; raissi2017hidden in the context of both inference and system identification. Despite the flexibility and mathematical elegance of Gaussian processes in encoding prior information, the treatment of nonlinear problems introduces two important limitations. First, in raissi2017numerical; raissi2017hidden the authors had to locally linearize any nonlinear terms in time, thus limiting the applicability of the proposed methods to discrete-time domains and compromising the accuracy of their predictions in strongly nonlinear regimes. Secondly, the Bayesian nature of Gaussian process regression requires certain prior assumptions that may limit the representation capacity of the model and give rise to robustness/brittleness issues, especially for nonlinear problems owhadi2015brittleness.

1.1 Problem setup and summary of contributions

In this work we take a different approach by employing deep neural networks and leverage their well known capability as universal function approximators hornik1989multilayer. In this setting, we can directly tackle nonlinear problems without the need for committing to any prior assumptions, linearization, or local time-stepping. We exploit recent developments in automatic differentiation baydin2015automatic – one of the most useful but perhaps underused techniques in scientific computing – to differentiate neural networks with respect to their input coordinates and model parameters to obtain physics informed neural networks. Such neural networks are constrained to respect any symmetry, invariance, or conservation principles originating from the physical laws that govern the observed data, as modeled by general time-dependent and nonlinear partial differential equations. This simple yet powerful construction allows us to tackle a wide range of problems in computational science and introduces a potentially disruptive technology leading to the development of new data-efficient and physics-informed learning machines, new classes of numerical solvers for partial differential equations, as well as new data-driven approaches for model inversion and systems identification.

The general aim of this work is to set the foundations for a new paradigm in modeling and computation that enriches deep learning with the longstanding developments in mathematical physics. These developments are presented in the context of two main problem classes: data-driven solution and data-driven discovery of partial differential equations. To this end, let us consider parametrized and nonlinear partial differential equations of the general form

$$u_t + \mathcal{N}[u; \lambda] = 0, \quad x \in \Omega, \; t \in [0, T],$$

where $u(t,x)$ denotes the latent (hidden) solution and $\mathcal{N}[\cdot; \lambda]$ is a nonlinear operator parametrized by $\lambda$. This setup encapsulates a wide range of problems in mathematical physics including conservation laws, diffusion processes, advection-diffusion-reaction systems, and kinetic equations. As a motivating example, the one dimensional Burgers' equation basdevant1986spectral corresponds to the case where $\mathcal{N}[u; \lambda] = \lambda_1 u u_x - \lambda_2 u_{xx}$ and $\lambda = (\lambda_1, \lambda_2)$. Here, the subscripts denote partial differentiation in either time or space. Given noisy measurements of the system, we are interested in the solution of two distinct problems. The first problem is that of predictive inference, filtering and smoothing, or data-driven solutions of partial differential equations raissi2017numerical; raissi2017inferring, which asks: given fixed model parameters $\lambda$, what can be said about the unknown hidden state $u(t,x)$ of the system? The second problem is that of learning, system identification, or data-driven discovery of partial differential equations raissi2017hidden; raissi2017machine; Rudye1602614, which asks: what are the parameters $\lambda$ that best describe the observed data?

In this first part of our two-part treatise, we focus on computing data-driven solutions to partial differential equations of the general form

$$u_t + \mathcal{N}[u] = 0, \quad x \in \Omega, \; t \in [0, T], \qquad (1)$$

where $u(t,x)$ denotes the latent (hidden) solution, $\mathcal{N}[\cdot]$ is a nonlinear differential operator, and $\Omega$ is a subset of $\mathbb{R}^D$. In what follows, we put forth two distinct classes of algorithms, namely continuous and discrete time models, and highlight their properties and performance through the lens of different benchmark problems. All code and data-sets accompanying this manuscript are available at https://github.com/maziarraissi/PINNs.

2 Continuous Time Models

We define $f(t,x)$ to be given by the left-hand side of equation (1), i.e.,

$$f := u_t + \mathcal{N}[u], \qquad (2)$$

and proceed by approximating $u(t,x)$ by a deep neural network. This assumption, along with equation (2), results in a physics informed neural network $f(t,x)$. This network can be derived by applying the chain rule for differentiating compositions of functions using automatic differentiation baydin2015automatic.

2.1 Example (Burgers’ Equation)

As an example, let us consider the Burgers' equation. This equation arises in various areas of applied mathematics, including fluid mechanics, nonlinear acoustics, gas dynamics, and traffic flow basdevant1986spectral. It is a fundamental partial differential equation and can be derived from the Navier-Stokes equations for the velocity field by dropping the pressure gradient term. For small values of the viscosity parameter, Burgers' equation can lead to shock formation that is notoriously hard to resolve by classical numerical methods. In one space dimension, Burgers' equation along with Dirichlet boundary conditions reads as

$$u_t + u u_x - (0.01/\pi)\, u_{xx} = 0, \quad x \in [-1, 1], \; t \in [0, 1],$$
$$u(0, x) = -\sin(\pi x),$$
$$u(t, -1) = u(t, 1) = 0. \qquad (3)$$

Let us define $f(t,x)$ to be given by

$$f := u_t + u u_x - (0.01/\pi)\, u_{xx},$$

and proceed by approximating $u(t,x)$ by a deep neural network. To highlight the simplicity of implementing this idea, we have included a Python code snippet using TensorFlow abadi2016tensorflow, currently one of the most popular and well-documented open source libraries for machine learning computations. To this end, $u(t,x)$ can simply be defined as

def u(t, x):
    # stack (t, x) into a single input and evaluate the fully-connected network
    u = neural_net(tf.concat([t, x], 1), weights, biases)
    return u

Correspondingly, the physics informed neural network $f(t,x)$ takes the form

def f(t, x):
    u_ = u(t, x)                          # surrogate solution evaluated at (t, x)
    u_t = tf.gradients(u_, t)[0]          # du/dt via automatic differentiation
    u_x = tf.gradients(u_, x)[0]          # du/dx
    u_xx = tf.gradients(u_x, x)[0]        # d2u/dx2
    f = u_t + u_*u_x - (0.01/np.pi)*u_xx  # residual of equation (3); np is numpy
    return f
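
The snippets above assume a helper `neural_net` together with lists `weights` and `biases`, none of which are defined in the excerpt. A minimal sketch of such a helper, assuming a plain fully-connected architecture with hyperbolic tangent activations (the exact architecture and initialization are our assumptions, not given in the text), could read:

import tensorflow as tf

def neural_net(X, weights, biases):
    # weights and biases are assumed to be lists of tf.Variable, one pair per layer
    H = X
    for W, b in zip(weights[:-1], biases[:-1]):
        H = tf.tanh(tf.add(tf.matmul(H, W), b))            # hidden layers with tanh activation
    return tf.add(tf.matmul(H, weights[-1]), biases[-1])   # linear output layer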

The shared parameters between the neural networks $u(t,x)$ and $f(t,x)$ can be learned by minimizing the mean squared error loss

$$MSE = MSE_u + MSE_f, \qquad (4)$$

where

$$MSE_u = \frac{1}{N_u} \sum_{i=1}^{N_u} \left| u(t_u^i, x_u^i) - u^i \right|^2,$$

and

$$MSE_f = \frac{1}{N_f} \sum_{i=1}^{N_f} \left| f(t_f^i, x_f^i) \right|^2.$$

Here, $\{t_u^i, x_u^i, u^i\}_{i=1}^{N_u}$ denote the initial and boundary training data on $u(t,x)$ and $\{t_f^i, x_f^i\}_{i=1}^{N_f}$ specify the collocation points for $f(t,x)$. The loss $MSE_u$ corresponds to the initial and boundary data while $MSE_f$ enforces the structure imposed by equation (3) at a finite set of collocation points.
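
To connect the snippets above with the loss of equation (4), the two mean squared error terms can be assembled as follows; the placeholder names and shapes are illustrative assumptions rather than part of the original code:

import tensorflow as tf

# Hypothetical placeholders for the training data (TF 1.x style)
t_u = tf.placeholder(tf.float32, shape=[None, 1])  # times of initial/boundary points
x_u = tf.placeholder(tf.float32, shape=[None, 1])  # locations of initial/boundary points
u_d = tf.placeholder(tf.float32, shape=[None, 1])  # measured u at those points
t_f = tf.placeholder(tf.float32, shape=[None, 1])  # times of collocation points
x_f = tf.placeholder(tf.float32, shape=[None, 1])  # locations of collocation points

mse_u = tf.reduce_mean(tf.square(u(t_u, x_u) - u_d))  # MSE_u: misfit on initial/boundary data
mse_f = tf.reduce_mean(tf.square(f(t_f, x_f)))        # MSE_f: PDE residual at collocation points
loss = mse_u + mse_f                                  # composite loss of equation (4)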

In all benchmarks considered in this work, the total number of training data $N_u$ is relatively small (a few hundred up to a few thousand points), and we chose to optimize all loss functions using L-BFGS, a quasi-Newton, full-batch gradient-based optimization algorithm liu1989limited. For larger data-sets, a more computationally efficient mini-batch setting can be readily employed using stochastic gradient descent and its modern variants goodfellow2016deep; kingma2014adam. Despite the fact that there is no theoretical guarantee that this procedure converges to a global minimum, our empirical evidence indicates that, if the given partial differential equation is well-posed and its solution is unique, our method is capable of achieving good prediction accuracy given a sufficiently expressive neural network architecture and a sufficient number of collocation points $N_f$. This general observation deeply relates to the optimization landscape induced by the mean squared error loss of equation (4), and defines an open question for research that is in sync with recent theoretical developments in deep learning choromanska2015loss; shwartz2017opening. Here, we will test the robustness of the proposed methodology using a series of systematic sensitivity studies that accompany the numerical results presented in the following.
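
As a concrete but non-authoritative sketch of this full-batch training step, the TensorFlow 1.x wrapper around SciPy's L-BFGS-B implementation can be used; the optimizer options and the names of the NumPy training arrays below are our assumptions:

# Assumes `loss` and the placeholders defined in the previous sketch
optimizer = tf.contrib.opt.ScipyOptimizerInterface(
    loss, method='L-BFGS-B',
    options={'maxiter': 50000, 'ftol': 1e-10})   # full-batch quasi-Newton optimization

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    feed = {t_u: t_u_train, x_u: x_u_train, u_d: u_train,   # hypothetical NumPy arrays
            t_f: t_f_train, x_f: x_f_train}
    optimizer.minimize(sess, feed_dict=feed)     # runs SciPy L-BFGS-B until convergence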

Figure 1 summarizes our results for the data-driven solution of the Burgers' equation. Specifically, given a set of $N_u = 100$ randomly distributed initial and boundary data, we learn the latent solution $u(t,x)$ by training all parameters of a 9-layer deep neural network using the mean squared error loss of (4). Each hidden layer contained 20 neurons and a hyperbolic tangent activation function. In general, the neural network should be given sufficient approximation capacity in order to accommodate the anticipated complexity of $u(t,x)$. However, in this example, our choice aims to highlight the robustness of the proposed method with respect to the well-known issue of over-fitting. Specifically, the term $MSE_f$ in equation (4) acts as a regularization mechanism that penalizes solutions that do not satisfy equation (3). Therefore, a key property of physics informed neural networks is that they can be effectively trained using small data sets; a setting often encountered in the study of physical systems for which the cost of data acquisition may be prohibitive.

The top panel of Figure 1 shows the predicted spatio-temporal solution $u(t,x)$, along with the locations of the initial and boundary training data. We must underline that, unlike any classical numerical method for solving partial differential equations, this prediction is obtained without any sort of discretization of the spatio-temporal domain. The exact solution for this problem is analytically available basdevant1986spectral, and the resulting prediction error is measured at $6.7 \cdot 10^{-4}$ in the relative $L_2$-norm. Note that this error is about two orders of magnitude lower than the one reported in our previous work on data-driven solution of partial differential equations using Gaussian processes raissi2017numerical. A more detailed assessment of the predicted solution is presented in the bottom panel of Figure 1. In particular, we present a comparison between the exact and the predicted solutions at different time instants. Using only a handful of initial and boundary data, the physics informed neural network can accurately capture the intricate nonlinear behavior of the Burgers' equation that leads to the development of a sharp internal layer around $t = 0.4$. The latter is notoriously hard to accurately resolve with classical numerical methods and requires a laborious spatio-temporal discretization of equation (3).

Figure 1: Burgers' equation: Top: Predicted solution $u(t,x)$ along with the initial and boundary training data. In addition we are using 10,000 collocation points generated using a Latin Hypercube Sampling strategy. Bottom: Comparison of the predicted and exact solutions corresponding to the three temporal snapshots depicted by the white vertical lines in the top panel. The relative $L_2$ error for this case is $6.7 \cdot 10^{-4}$. Model training took approximately 60 seconds on a single NVIDIA Titan X GPU card.

To further analyze the performance of our method, we have performed a systematic study to quantify its predictive accuracy for different numbers of training and collocation points, as well as for different neural network architectures. In Table 1 we report the resulting relative $L_2$ error for different numbers of initial and boundary training data $N_u$ and different numbers of collocation points $N_f$, while keeping the 9-layer network architecture fixed. The general trend shows increased prediction accuracy as the total number of training data $N_u$ is increased, given a sufficient number of collocation points $N_f$. This observation highlights a key strength of physics informed neural networks: by encoding the structure of the underlying physical law through the collocation points $N_f$, one can obtain a more accurate and data-efficient learning algorithm. (Note that the case $N_f = 0$ corresponds to a standard neural network model, i.e., a neural network that does not take into account the underlying governing equation.) Finally, Table 2 shows the resulting relative $L_2$ error for different numbers of hidden layers and different numbers of neurons per layer, while the total number of training and collocation points is kept fixed to $N_u = 100$ and $N_f = 10{,}000$, respectively. As expected, we observe that as the number of layers and neurons is increased (and hence the capacity of the neural network to approximate more complex functions), the predictive accuracy is increased.

$N_u$ \ $N_f$   2000      4000      6000      7000      8000      10000
20              2.9e-01   4.4e-01   8.9e-01   1.2e+00   9.9e-02   4.2e-02
40              6.5e-02   1.1e-02   5.0e-01   9.6e-03   4.6e-01   7.5e-02
60              3.6e-01   1.2e-02   1.7e-01   5.9e-03   1.9e-03   8.2e-03
80              5.5e-03   1.0e-03   3.2e-03   7.8e-03   4.9e-02   4.5e-03
100             6.6e-02   2.7e-01   7.2e-03   6.8e-04   2.2e-03   6.7e-04
200             1.5e-01   2.3e-03   8.2e-04   8.9e-04   6.1e-04   4.9e-04
Table 1: Burgers' equation: Relative $L_2$ error between the predicted and the exact solution $u(t,x)$ for different numbers of initial and boundary training data $N_u$, and different numbers of collocation points $N_f$. Here, the network architecture is fixed to 9 layers with 20 neurons per hidden layer.
Layers \ Neurons   10        20        40
2                  7.4e-02   5.3e-02   1.0e-01
4                  3.0e-03   9.4e-04   6.4e-04
6                  9.6e-03   1.3e-03   6.1e-04
8                  2.5e-03   9.6e-04   5.6e-04
Table 2: Burgers' equation: Relative $L_2$ error between the predicted and the exact solution $u(t,x)$ for different numbers of hidden layers and different numbers of neurons per layer. Here, the total number of training and collocation points is fixed to $N_u = 100$ and $N_f = 10{,}000$, respectively.

2.2 Example (Schrödinger Equation)

This example aims to highlight the ability of our method to handle periodic boundary conditions, complex-valued solutions, as well as different types of nonlinearities in the governing partial differential equations. The one-dimensional nonlinear Schrödinger equation is a classical field equation that is used to study quantum mechanical systems, including nonlinear wave propagation in optical fibers and/or waveguides, Bose-Einstein condensates, and plasma waves. In optics, the nonlinear term arises from the intensity dependent index of refraction of a given material. Similarly, the nonlinear term for Bose-Einstein condensates is a result of the mean-field interactions of an interacting, N-body system. The nonlinear Schrödinger equation along with periodic boundary conditions is given by

$$i h_t + 0.5\, h_{xx} + |h|^2 h = 0, \quad x \in [-5, 5], \; t \in [0, \pi/2],$$
$$h(0, x) = 2\,\mathrm{sech}(x),$$
$$h(t, -5) = h(t, 5),$$
$$h_x(t, -5) = h_x(t, 5), \qquad (5)$$

where $h(t,x)$ is the complex-valued solution. Let us define $f(t,x)$ to be given by

$$f := i h_t + 0.5\, h_{xx} + |h|^2 h,$$

and proceed by placing a complex-valued neural network prior on $h(t,x)$. In fact, if $u(t,x)$ denotes the real part of $h(t,x)$ and $v(t,x)$ is the imaginary part, we are placing a multi-output neural network prior on $h(t,x) = [u(t,x), v(t,x)]$. This will result in the complex-valued (multi-output) physics informed neural network $f(t,x)$. The shared parameters of the neural networks $h(t,x)$ and $f(t,x)$ can be learned by minimizing the mean squared error loss

$$MSE = MSE_0 + MSE_b + MSE_f, \qquad (6)$$

where

$$MSE_0 = \frac{1}{N_0} \sum_{i=1}^{N_0} \left| h(0, x_0^i) - h_0^i \right|^2,$$

$$MSE_b = \frac{1}{N_b} \sum_{i=1}^{N_b} \left( \left| h(t_b^i, -5) - h(t_b^i, 5) \right|^2 + \left| h_x(t_b^i, -5) - h_x(t_b^i, 5) \right|^2 \right),$$

and

$$MSE_f = \frac{1}{N_f} \sum_{i=1}^{N_f} \left| f(t_f^i, x_f^i) \right|^2.$$

Here, $\{x_0^i, h_0^i\}_{i=1}^{N_0}$ denotes the initial data, $\{t_b^i\}_{i=1}^{N_b}$ corresponds to the collocation points on the boundary, and $\{t_f^i, x_f^i\}_{i=1}^{N_f}$ represents the collocation points on $f(t,x)$. Consequently, $MSE_0$ corresponds to the loss on the initial data, $MSE_b$ enforces the periodic boundary conditions, and $MSE_f$ penalizes the Schrödinger equation not being satisfied on the collocation points.
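
To make the complex-valued construction concrete, the residual $f = i h_t + 0.5 h_{xx} + |h|^2 h$ can be split into two real-valued components by writing $h = u + i v$. The sketch below assumes a two-output network returning $[u, v]$ and reuses the hypothetical `neural_net` helper from the Burgers example; it illustrates the construction and is not the authors' code:

def f(t, x):
    # Two-output surrogate: column 0 is the real part u, column 1 the imaginary part v
    uv = neural_net(tf.concat([t, x], 1), weights, biases)
    u, v = uv[:, 0:1], uv[:, 1:2]
    u_t = tf.gradients(u, t)[0]
    v_t = tf.gradients(v, t)[0]
    u_xx = tf.gradients(tf.gradients(u, x)[0], x)[0]
    v_xx = tf.gradients(tf.gradients(v, x)[0], x)[0]
    # Imaginary and real parts of i h_t + 0.5 h_xx + |h|^2 h = 0, respectively
    f_u = u_t + 0.5 * v_xx + (u**2 + v**2) * v
    f_v = -v_t + 0.5 * u_xx + (u**2 + v**2) * u
    return f_u, f_v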

In order to assess the accuracy of our method, we have simulated equation (5) using conventional spectral methods to create a high-resolution data set. Specifically, starting from an initial state $h(0,x) = 2\,\mathrm{sech}(x)$ and assuming periodic boundary conditions $h(t,-5) = h(t,5)$ and $h_x(t,-5) = h_x(t,5)$, we have integrated equation (5) up to a final time $t = \pi/2$ using the Chebfun package driscoll2014chebfun with a spectral Fourier discretization with 256 modes and a fourth-order explicit Runge-Kutta temporal integrator. Under our data-driven setting, all we observe are measurements $\{x_0^i, h_0^i\}_{i=1}^{N_0}$ of the latent function $h(t,x)$ at time $t = 0$. In particular, the training set consists of a total of $N_0 = 50$ data points on $h(0,x)$ randomly parsed from the full high-resolution data-set, as well as $N_b = 50$ randomly sampled collocation points $\{t_b^i\}_{i=1}^{N_b}$ for enforcing the periodic boundaries. Moreover, we have assumed $N_f = 20{,}000$ randomly sampled collocation points used to enforce equation (5) inside the solution domain. All randomly sampled point locations were generated using a space filling Latin Hypercube Sampling strategy stein1987large.

Here our goal is to infer the entire spatio-temporal solution $h(t,x)$ of the Schrödinger equation (5). We chose to jointly represent the latent function $h(t,x) = [u(t,x), v(t,x)]$ using a 5-layer deep neural network with 100 neurons per layer and a hyperbolic tangent activation function. Figure 2 summarizes the results of our experiment. Specifically, the top panel of Figure 2 shows the magnitude of the predicted spatio-temporal solution $|h(t,x)|$, along with the locations of the initial and boundary training data. The resulting prediction error is validated against the test data for this problem, and is measured at $1.97 \cdot 10^{-3}$ in the relative $L_2$-norm. A more detailed assessment of the predicted solution is presented in the bottom panel of Figure 2. In particular, we present a comparison between the exact and the predicted solutions at different time instants. Using only a handful of initial data, the physics informed neural network can accurately capture the intricate nonlinear behavior of the Schrödinger equation.

Figure 2: Schrödinger equation: Top: Predicted solution $|h(t,x)|$ along with the initial and boundary training data. In addition we are using 20,000 collocation points generated using a Latin Hypercube Sampling strategy. Bottom: Comparison of the predicted and exact solutions corresponding to the three temporal snapshots depicted by the dashed vertical lines in the top panel. The relative $L_2$ error for this case is $1.97 \cdot 10^{-3}$.

One potential limitation of the continuous time neural network models considered so far stems from the need to use a large number of collocation points $N_f$ in order to enforce physics informed constraints in the entire spatio-temporal domain. Although this poses no significant issues for problems in one or two spatial dimensions, it may introduce a severe bottleneck in higher dimensional problems, as the total number of collocation points needed to globally enforce a physics informed constraint (i.e., in our case a partial differential equation) will increase exponentially. In the next section, we put forth a different approach that circumvents the need for collocation points by introducing a more structured neural network representation leveraging the classical Runge-Kutta time-stepping schemes iserles2009first.

3 Discrete Time Models

Let us apply the general form of Runge-Kutta methods with $q$ stages iserles2009first to equation (1) and obtain

$$u^{n+c_i} = u^n - \Delta t \sum_{j=1}^{q} a_{ij}\, \mathcal{N}[u^{n+c_j}], \quad i = 1, \ldots, q,$$
$$u^{n+1} = u^n - \Delta t \sum_{j=1}^{q} b_j\, \mathcal{N}[u^{n+c_j}]. \qquad (7)$$

Here, $u^{n+c_j}(x) = u(t^n + c_j \Delta t, x)$ for $j = 1, \ldots, q$. This general form encapsulates both implicit and explicit time-stepping schemes, depending on the choice of the parameters $\{a_{ij}, b_j, c_j\}$. Equations (7) can be equivalently expressed as

$$u^n = u_i^n, \quad i = 1, \ldots, q, \qquad u^n = u_{q+1}^n, \qquad (8)$$

where

$$u_i^n := u^{n+c_i} + \Delta t \sum_{j=1}^{q} a_{ij}\, \mathcal{N}[u^{n+c_j}], \quad i = 1, \ldots, q,$$
$$u_{q+1}^n := u^{n+1} + \Delta t \sum_{j=1}^{q} b_j\, \mathcal{N}[u^{n+c_j}]. \qquad (9)$$

We proceed by placing a multi-output neural network prior on

$$\left[ u^{n+c_1}(x), \ldots, u^{n+c_q}(x), u^{n+1}(x) \right]. \qquad (10)$$

This prior assumption, along with equations (9), results in a physics informed neural network that takes $x$ as an input and outputs

$$\left[ u_1^n(x), \ldots, u_q^n(x), u_{q+1}^n(x) \right]. \qquad (11)$$

3.1 Example (Burgers’ Equation)

To highlight the key features of the discrete time representation we revisit the problem of data-driven solution of the Burgers' equation. For this case, the nonlinear operator in equation (9) is given by

$$\mathcal{N}[u^{n+c_j}] = u^{n+c_j} u_x^{n+c_j} - (0.01/\pi)\, u_{xx}^{n+c_j},$$

and the shared parameters of the neural networks (10) and (11) can be learned by minimizing the sum of squared errors

$$SSE = SSE_n + SSE_b, \qquad (12)$$

where

$$SSE_n = \sum_{j=1}^{q+1} \sum_{i=1}^{N_n} \left| u_j^n(x^{n,i}) - u^{n,i} \right|^2,$$

and

$$SSE_b = \sum_{i=1}^{q} \left( \left| u^{n+c_i}(-1) \right|^2 + \left| u^{n+c_i}(1) \right|^2 \right) + \left| u^{n+1}(-1) \right|^2 + \left| u^{n+1}(1) \right|^2.$$

Here, $\{x^{n,i}, u^{n,i}\}_{i=1}^{N_n}$ corresponds to the data at time $t^n$. The Runge-Kutta scheme now allows us to infer the latent solution $u(t,x)$ in a sequential fashion. Starting from initial data at time $t^n$ and data at the domain boundaries $x = -1$ and $x = 1$, we can use the aforementioned loss function (12) to train the networks of (10), (11), and predict the solution at time $t^{n+1}$. A Runge-Kutta time-stepping scheme would then use this prediction as initial data for the next step and proceed to train again and predict the solution at subsequent times, one step at a time.
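
A minimal sketch of how the outputs (11) are assembled from the network (10) via equation (9) for Burgers' equation is given below. Every name here is an illustrative assumption (the repository linked above contains the actual implementation); for concreteness we use the Butcher tableau of the two-stage, fourth-order Gauss-Legendre scheme:

import numpy as np
import tensorflow as tf

q, dt = 2, 0.8                                     # number of stages and time-step size
s3 = np.sqrt(3.0)
a_mat = np.array([[0.25, 0.25 - s3 / 6.0],         # two-stage Gauss-Legendre tableau
                  [0.25 + s3 / 6.0, 0.25]], dtype=np.float32)
b_vec = np.array([0.5, 0.5], dtype=np.float32)

x = tf.placeholder(tf.float32, shape=[None, 1])    # locations of the data at time t^n

def N_op(u_j, x):
    # Burgers' nonlinearity N[u] = u u_x - (0.01/pi) u_xx applied to one stage
    u_x = tf.gradients(u_j, x)[0]
    u_xx = tf.gradients(u_x, x)[0]
    return u_j * u_x - (0.01 / np.pi) * u_xx

def discrete_net(x):
    # Multi-output network of equation (10): q stage values plus u^{n+1}
    U_all = neural_net(x, weights, biases)             # shape (N, q+1); helper as before
    stages = [U_all[:, j:j + 1] for j in range(q)]     # u^{n+c_j}(x)
    u_next = U_all[:, q:q + 1]                         # u^{n+1}(x)
    N_stages = [N_op(s, x) for s in stages]
    # Equation (9): each of the q+1 outputs below is a prediction of the data u^n(x)
    u_i = [stages[i] + dt * sum(a_mat[i, j] * N_stages[j] for j in range(q))
           for i in range(q)]
    u_qp1 = u_next + dt * sum(b_vec[j] * N_stages[j] for j in range(q))
    return u_i + [u_qp1]

The sum of squared errors (12) then compares each of these q+1 outputs against the same measured snapshot at time $t^n$, with the boundary terms of $SSE_b$ obtained by evaluating the stage values at $x = -1$ and $x = 1$.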

In classical numerical analysis, these steps are usually confined to be small due to stability constraints for explicit schemes or computational complexity constraints for implicit formulations iserles2009first. These constraints become more severe as the total number of Runge-Kutta stages $q$ is increased, and, for most problems of practical interest, one needs to take thousands to millions of such steps until the solution is resolved up to a desired final time. In sharp contrast to classical methods, here we can employ implicit Runge-Kutta schemes with an arbitrarily large number of stages at effectively no extra cost. (To be precise, it is only the number of parameters in the last layer of the neural network that increases linearly with the total number of stages.) This enables us to take very large time steps while retaining stability and high predictive accuracy, therefore allowing us to resolve the entire spatio-temporal solution in a single step.

The result of applying this process to the Burgers' equation is presented in Figure 3. For illustration purposes, we start with a set of initial data at $t = 0.1$, and employ a physics informed neural network induced by an implicit Runge-Kutta scheme with $q = 500$ stages to predict the solution at time $t = 0.9$ in a single step. The theoretical error estimates for this scheme predict a temporal error accumulation of $\mathcal{O}(\Delta t^{2q})$ iserles2009first, which in our case translates into an error way below machine precision. To our knowledge, this is the first time that an implicit Runge-Kutta scheme of such high order has ever been used. Remarkably, starting from smooth initial data at $t = 0.1$ we can predict the nearly discontinuous solution at $t = 0.9$ in a single time-step with a relative $L_2$ error of $8.2 \cdot 10^{-4}$. This error is two orders of magnitude lower than the one reported in raissi2017numerical, and it is entirely attributed to the neural network's capacity to approximate $u(t,x)$, as well as to the degree that the sum of squared errors loss allows interpolation of the training data. The network architecture used here consists of 4 layers with 50 neurons in each hidden layer.


Figure 3: Burgers' equation: Top: Solution $u(t,x)$ along with the location of the initial training snapshot at $t = 0.1$ and the final prediction snapshot at $t = 0.9$. Bottom: Initial training data and final prediction at the snapshots depicted by the white vertical lines in the top panel. The relative $L_2$ error for this case is $8.2 \cdot 10^{-4}$.

A detailed systematic study to quantify the effect of different network architectures is presented in Table 3. By keeping the number of Runge-Kutta stages fixed to $q = 500$ and the time-step size to $\Delta t = 0.8$, we have varied the number of hidden layers and the number of neurons per layer, and monitored the resulting relative $L_2$ error for the predicted solution at time $t = 0.9$. Evidently, as the neural network capacity is increased, the predictive accuracy is enhanced.

Layers \ Neurons   10        25        50
1                  4.1e-02   4.1e-02   1.5e-01
2                  2.7e-03   5.0e-03   2.4e-03
3                  3.6e-03   1.9e-03   9.5e-04
Table 3: Burgers' equation: Relative final prediction error measured in the $L_2$ norm for different numbers of hidden layers and neurons in each layer. Here, the number of Runge-Kutta stages is fixed to $q = 500$ and the time-step size to $\Delta t = 0.8$.

The key parameters controlling the performance of our discrete time algorithm are the total number of Runge-Kutta stages $q$ and the time-step size $\Delta t$. In Table 4 we summarize the results of an extensive systematic study where we fix the network architecture to 4 hidden layers with 50 neurons per layer, and vary the number of Runge-Kutta stages $q$ and the time-step size $\Delta t$. Specifically, we see how cases with low numbers of stages fail to yield accurate results when the time-step size is large. For instance, the case $q = 1$ corresponding to the classical trapezoidal rule, and the case $q = 2$ corresponding to the fourth-order Gauss-Legendre method, cannot retain their predictive accuracy for time-steps larger than 0.2, thus mandating a solution strategy with multiple time-steps of small size. On the other hand, the ability to push the number of Runge-Kutta stages to 32 and even higher allows us to take very large time steps, and effectively resolve the solution in a single step without sacrificing the accuracy of our predictions. Moreover, numerical stability is not sacrificed either, as implicit Runge-Kutta methods constitute the only family of time-stepping schemes that remain A-stable regardless of their order, thus making them ideal for stiff problems iserles2009first. These properties are unprecedented for an algorithm of such implementation simplicity, and illustrate one of the key highlights of our discrete time approach.

$q$ \ $\Delta t$   0.2       0.4       0.6       0.8
1                  3.5e-02   1.1e-01   2.3e-01   3.8e-01
2                  5.4e-03   5.1e-02   9.3e-02   2.2e-01
4                  1.2e-03   1.5e-02   3.6e-02   5.4e-02
8                  6.7e-04   1.8e-03   8.7e-03   5.8e-02
16                 5.1e-04   7.6e-02   8.4e-04   1.1e-03
32                 7.4e-04   5.2e-04   4.2e-04   7.0e-04
64                 4.5e-04   4.8e-04   1.2e-03   7.8e-04
100                5.1e-04   5.7e-04   1.8e-02   1.2e-03
500                4.1e-04   3.8e-04   4.2e-04   8.2e-04
Table 4: Burgers' equation: Relative final prediction error measured in the $L_2$ norm for different numbers of Runge-Kutta stages $q$ and time-step sizes $\Delta t$. Here, the network architecture is fixed to 4 hidden layers with 50 neurons in each layer.

3.2 Example (Allen-Cahn Equation)

This example aims to highlight the ability of the proposed discrete time models to handle different types of nonlinearity in the governing partial differential equation. To this end, let us consider the Allen-Cahn equation along with periodic boundary conditions

$$u_t - 0.0001\, u_{xx} + 5 u^3 - 5 u = 0, \quad x \in [-1, 1], \; t \in [0, 1],$$
$$u(0, x) = x^2 \cos(\pi x),$$
$$u(t, -1) = u(t, 1),$$
$$u_x(t, -1) = u_x(t, 1). \qquad (13)$$

The Allen-Cahn equation is a well-known equation from the area of reaction-diffusion systems. It describes the process of phase separation in multi-component alloy systems, including order-disorder transitions. For the Allen-Cahn equation, the nonlinear operator in equation (9) is given by

$$\mathcal{N}[u^{n+c_j}] = -0.0001\, u_{xx}^{n+c_j} + 5 \left( u^{n+c_j} \right)^3 - 5\, u^{n+c_j},$$

and the shared parameters of the neural networks (10) and (11) can be learned by minimizing the sum of squared errors

$$SSE = SSE_n + SSE_b, \qquad (14)$$

where

$$SSE_n = \sum_{j=1}^{q+1} \sum_{i=1}^{N_n} \left| u_j^n(x^{n,i}) - u^{n,i} \right|^2,$$

and

$$SSE_b = \sum_{i=1}^{q} \left( \left| u^{n+c_i}(-1) - u^{n+c_i}(1) \right|^2 + \left| u_x^{n+c_i}(-1) - u_x^{n+c_i}(1) \right|^2 \right) + \left| u^{n+1}(-1) - u^{n+1}(1) \right|^2 + \left| u_x^{n+1}(-1) - u_x^{n+1}(1) \right|^2.$$

Here, $\{x^{n,i}, u^{n,i}\}_{i=1}^{N_n}$ corresponds to the data at time $t^n$. We have generated a training and test data-set by simulating the Allen-Cahn equation (13) using conventional spectral methods. Specifically, starting from an initial condition $u(0,x) = x^2 \cos(\pi x)$ and assuming periodic boundary conditions $u(t,-1) = u(t,1)$ and $u_x(t,-1) = u_x(t,1)$, we have integrated equation (13) up to a final time $t = 1.0$ using the Chebfun package driscoll2014chebfun with a spectral Fourier discretization with 512 modes and a fourth-order explicit Runge-Kutta temporal integrator with time-step $\Delta t = 10^{-5}$.
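
Relative to the Burgers sketch above, only the nonlinear operator changes; a minimal adaptation, reusing the same assumed helpers, would read:

def N_op(u_j, x):
    # Allen-Cahn nonlinearity N[u] = -0.0001 u_xx + 5 u^3 - 5 u applied to one stage
    u_x = tf.gradients(u_j, x)[0]
    u_xx = tf.gradients(u_x, x)[0]
    return -0.0001 * u_xx + 5.0 * u_j**3 - 5.0 * u_j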

In this example, we assume $N_n = 200$ initial data points that are randomly sub-sampled from the exact solution at time $t = 0.1$, and our goal is to predict the solution at time $t = 0.9$ using a single time-step with size $\Delta t = 0.8$. To this end, we employ a discrete time physics informed neural network with 4 hidden layers and 200 neurons per layer, while the output layer predicts 101 quantities of interest corresponding to the $q = 100$ Runge-Kutta stages $u^{n+c_i}(x)$, $i = 1, \ldots, q$, and the solution at final time $u^{n+1}(x)$. Figure 4 summarizes our predictions after the network has been trained using the loss function of equation (14). Evidently, despite the complex dynamics leading to a solution with two sharp internal layers, we are able to obtain an accurate prediction of the solution at $t = 0.9$ using only a small number of scattered measurements at $t = 0.1$.

Figure 4: Allen-Cahn equation: Top: Solution $u(t,x)$ along with the location of the initial training snapshot at $t = 0.1$ and the final prediction snapshot at $t = 0.9$. Bottom: Initial training data and final prediction at the snapshots depicted by the white vertical lines in the top panel. The relative $L_2$ error for this case is $6.99 \cdot 10^{-3}$.

4 Summary and Discussion

We have introduced physics informed neural networks, a new class of universal function approximators that is capable of encoding any underlying physical laws that govern a given data-set, and can be described by partial differential equations. In this work, we design data-driven algorithms for inferring solutions to general nonlinear partial differential equations, and constructing computationally efficient physics-informed surrogate models. The resulting methods showcase a series of promising results for a diverse collection of problems in computational science, and open the path for endowing deep learning with the powerful capacity of mathematical physics to model the world around us. As deep learning technology is continuing to grow rapidly both in terms of methodological and algorithmic developments, we believe that this is a timely contribution that can benefit practitioners across a wide range of scientific domains. Specific applications that can readily enjoy these benefits include, but are not limited to, data-driven forecasting of physical processes, model predictive control, multi-physics/multi-scale modeling and simulation.

We must note however that the proposed methods should not be viewed as replacements of classical numerical methods for solving partial differential equations (e.g., finite elements, spectral methods, etc.). Such methods have matured over the last 50 years and, in many cases, meet the robustness and computational efficiency standards required in practice. Our message here, as advocated in Section 3, is that classical methods such as the Runge-Kutta time-stepping schemes can coexist in harmony with deep neural networks, and offer invaluable intuition in constructing structured predictive algorithms. Moreover, the implementation simplicity of the latter greatly favors rapid development and testing of new ideas, potentially opening the path for a new era in data-driven scientific computing. This will be further highlighted in the second part of this paper in which physics informed neural networks are put to the test of data-driven discovery of partial differential equations.

Finally, in terms of future work, one pressing question involves addressing the problem of quantifying the uncertainty associated with the neural network predictions. Although this important element was naturally addressed in previous work employing Gaussian processes raissi2017numerical, it is not captured by the proposed methodology in its present form and requires further investigation.

Acknowledgements

This work received support by the DARPA EQUiPS grant N66001-15-2-4055, the MURI/ARO grant W911NF-15-1-0562, and the AFOSR grant FA9550-17-1-0013. All data and codes used in this manuscript are publicly available on GitHub at https://github.com/maziarraissi/PINNs.

References