DeepXDE: A deep learning library for solving differential equations

07/10/2019, by Lu Lu, et al., Brown University

Deep learning has achieved remarkable success in diverse applications; however, its use in solving partial differential equations (PDEs) has emerged only recently. Here, we present an overview of physics-informed neural networks (PINNs), which embed a PDE into the loss of the neural network using automatic differentiation. The PINN algorithm is simple, and it can be applied to different types of PDEs, including integro-differential equations, fractional PDEs, and stochastic PDEs. Moreover, PINNs solve inverse problems as easily as forward problems. We propose a new residual-based adaptive refinement (RAR) method to improve the training efficiency of PINNs. For pedagogical reasons, we compare the PINN algorithm to a standard finite element method. We also present a Python library for PINNs, DeepXDE, which is designed to serve both as an education tool to be used in the classroom as well as a research tool for solving problems in computational science and engineering. DeepXDE supports complex-geometry domains based on the technique of constructive solid geometry, and enables the user code to be compact, resembling closely the mathematical formulation. We introduce the usage of DeepXDE and its customizability, and we also demonstrate the capability of PINNs and the user-friendliness of DeepXDE for five different examples. More broadly, DeepXDE contributes to the more rapid development of the emerging Scientific Machine Learning field.


1 Introduction

In the last 15 years, deep learning in the form of deep neural networks (NNs) has been used very effectively in diverse applications [20], such as computer vision and natural language processing. Despite the remarkable success in these and related areas, deep learning has not yet been widely used in the field of scientific computing. However, more recently, solving partial differential equations (PDEs) via deep learning has emerged as a potentially new sub-field under the name of Scientific Machine Learning (SciML) [3].

To solve a PDE via deep learning, a key step is to constrain the neural network to minimize the PDE residual, and several approaches have been proposed to accomplish this. Compared to traditional mesh-based methods, such as the finite difference method (FDM) and the finite element method (FEM), deep learning could be a mesh-free approach by taking advantage of automatic differentiation [30], and could break the curse of dimensionality [28, 12]. Among these approaches, some can only be applied to particular types of problems, such as image-like input domains [16, 21, 39] or parabolic PDEs [4, 13]. Some researchers adopt the variational form of PDEs and minimize the corresponding energy functional [10, 14]. However, not all PDEs can be derived from a known functional, and thus Galerkin-type projections have also been considered [22]. Alternatively, one could use the PDE in strong form directly [9, 33, 18, 19, 5, 32, 30]; in this form, automatic differentiation could be used directly to avoid truncation errors and the numerical quadrature errors of variational forms. This strong-form approach was introduced in [30], coining the term physics-informed neural networks (PINNs). An attractive feature of PINNs is that they can be used to solve inverse problems with minimal change of the code for forward problems [30, 31]. In addition, PINNs have been further extended to solve integro-differential equations (IDEs), fractional differential equations (FDEs) [25], and stochastic differential equations (SDEs) [38, 36, 24, 37].

In this paper, we present various PINN algorithms implemented in a Python library, DeepXDE (the source code is published under the Apache License, Version 2.0, on GitHub: https://github.com/lululxvi/deepxde), which is designed to serve both as an education tool to be used in the classroom as well as a research tool for solving problems in computational science and engineering (CSE). DeepXDE can be used to solve multi-physics problems, and supports complex-geometry domains based on the technique of constructive solid geometry (CSG), hence avoiding tedious and time-consuming computational geometry tasks. By using DeepXDE, time-dependent PDEs can be solved as easily as steady-state problems by only defining the initial conditions in addition. Beyond the main workflow of DeepXDE, users can readily monitor and modify the solution process via callback functions, e.g., monitoring the Fourier spectrum of the neural network solution, which can reveal the learning mode of the NN (Fig. 2). Last but not least, DeepXDE is designed to keep the user code compact and manageable, resembling closely the mathematical formulation.

The paper is organized as follows. In Section 2, after briefly introducing deep neural networks, we present the algorithm, approximation theory, and error analysis of PINNs, and make a comparison between PINNs and FEM. We then discuss how to use PINNs to solve integro-differential equations and inverse problems. In addition, we propose the residual-based adaptive refinement (RAR) method to improve the training efficiency of PINNs. In Section 3, we introduce the usage of our library, DeepXDE, and its customizability. In Section 4, we demonstrate the capability of PINNs and the user-friendliness of DeepXDE for five different examples. Finally, we conclude the paper in Section 5.

2 Algorithm and theory of physics-informed neural networks

In this section, we first provide a brief overview of deep neural networks, and present the algorithm and theory of PINNs for solving PDEs. We then make a comparison between PINNs and FEM, and discuss how to use PINNs to solve integro-differential equations and inverse problems. Next we propose RAR, an efficient way to select the residual points adaptively during the training process.

2.1 Deep neural networks

Mathematically, a deep neural network is a particular choice of a compositional function. The simplest neural network is the feed-forward neural network (FNN), also called the multilayer perceptron (MLP), which applies linear and nonlinear transformations to the inputs recursively. Although many different types of neural networks have been developed in the past decades, such as the convolutional neural network and the recurrent neural network, in this paper we consider the FNN, which is sufficient for most PDE problems, and the residual neural network (ResNet), which is easier to train for deep networks. However, it is straightforward to employ other types of neural networks.

Let $\mathcal{N}^L(\mathbf{x}) : \mathbb{R}^{d_{\text{in}}} \to \mathbb{R}^{d_{\text{out}}}$ be an $L$-layer neural network, or an $(L-1)$-hidden layer neural network, with $N_\ell$ neurons in the $\ell$-th layer ($N_0 = d_{\text{in}}$, $N_L = d_{\text{out}}$). Let us denote the weight matrix and bias vector in the $\ell$-th layer by $\boldsymbol{W}^\ell \in \mathbb{R}^{N_\ell \times N_{\ell-1}}$ and $\boldsymbol{b}^\ell \in \mathbb{R}^{N_\ell}$, respectively. Given a nonlinear activation function $\sigma$, which is applied element-wise, the FNN is recursively defined as follows:

input layer: $\mathcal{N}^0(\mathbf{x}) = \mathbf{x} \in \mathbb{R}^{d_{\text{in}}}$,
hidden layers: $\mathcal{N}^\ell(\mathbf{x}) = \sigma(\boldsymbol{W}^\ell \mathcal{N}^{\ell-1}(\mathbf{x}) + \boldsymbol{b}^\ell) \in \mathbb{R}^{N_\ell}$, for $1 \le \ell \le L-1$,
output layer: $\mathcal{N}^L(\mathbf{x}) = \boldsymbol{W}^L \mathcal{N}^{L-1}(\mathbf{x}) + \boldsymbol{b}^L \in \mathbb{R}^{d_{\text{out}}}$;

see also a visualization of a neural network in Fig. 1. Commonly used activation functions include the logistic sigmoid $1/(1+e^{-x})$, the hyperbolic tangent ($\tanh$), and the rectified linear unit (ReLU, $\max\{x, 0\}$).
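To make the recursion above concrete, the following is a minimal NumPy sketch of the FNN forward pass; the layer sizes, the tanh activation, and the random parameters are illustrative choices and not part of DeepXDE.

import numpy as np

def fnn(x, weights, biases):
    # Hidden layers apply the activation; the output layer is linear, as defined above.
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.tanh(a @ W + b)               # hidden layer: sigma(W a + b)
    return a @ weights[-1] + biases[-1]      # output layer: W a + b

# Example: a network with layer sizes [1, 20, 20, 1] and random parameters.
rng = np.random.default_rng(0)
sizes = [1, 20, 20, 1]
weights = [rng.standard_normal((m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]
x = np.linspace(0, 1, 5).reshape(-1, 1)
print(fnn(x, weights, biases).shape)         # (5, 1)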

2.2 Physics-informed neural networks for solving PDEs

We consider the following PDE parameterized by $\boldsymbol{\lambda}$ for the solution $u(\mathbf{x})$ with $\mathbf{x} = (x_1, \ldots, x_d)$ defined on a domain $\Omega \subset \mathbb{R}^d$:

$$f\left(\mathbf{x}; \frac{\partial u}{\partial x_1}, \ldots, \frac{\partial u}{\partial x_d}; \frac{\partial^2 u}{\partial x_1 \partial x_1}, \ldots, \frac{\partial^2 u}{\partial x_1 \partial x_d}; \ldots; \boldsymbol{\lambda}\right) = 0, \qquad \mathbf{x} \in \Omega, \quad (1)$$

with suitable boundary conditions

$$\mathcal{B}(u, \mathbf{x}) = 0 \quad \text{on } \partial\Omega,$$

where $\mathcal{B}(u, \mathbf{x})$ could be Dirichlet, Neumann, Robin, or periodic boundary conditions. For time-dependent problems, we consider time $t$ as a special component of $\mathbf{x}$, and $\Omega$ contains the temporal domain. The initial condition can be simply treated as a special type of Dirichlet boundary condition on the spatio-temporal domain.

Figure 1: Schematic of a PINN for solving the diffusion equation $\partial u/\partial t = \lambda\, \partial^2 u/\partial x^2$ with mixed boundary conditions (BC) $u(x, t) = g_D(x, t)$ on $\Gamma_D \subset \partial\Omega$ and $\partial u/\partial n = g_R(u, x, t)$ on $\Gamma_R \subset \partial\Omega$. The initial condition (IC) is treated as a special type of boundary condition. $\mathcal{T}_f$ and $\mathcal{T}_b$ denote the two sets of residual points for the equation and the BC/IC.
  1. Construct a neural network $\hat{u}(\mathbf{x}; \boldsymbol{\theta})$ with parameters $\boldsymbol{\theta}$.

  2. Specify the two training sets $\mathcal{T}_f$ and $\mathcal{T}_b$ for the equation and boundary/initial conditions.

  3. Specify a loss function by summing the weighted $L^2$ norm of both the PDE equation and boundary condition residuals.

  4. Train the neural network to find the best parameters $\boldsymbol{\theta}^*$ by minimizing the loss function $\mathcal{L}(\boldsymbol{\theta}; \mathcal{T})$.

Procedure 1 The PINN algorithm for solving differential equations.

The algorithm of PINN [19, 30] is shown in Procedure 1, and visually in the schematic of Fig. 1, solving a diffusion equation $\partial u/\partial t = \lambda\, \partial^2 u/\partial x^2$ with mixed boundary conditions $u(x, t) = g_D(x, t)$ on $\Gamma_D \subset \partial\Omega$ and $\partial u/\partial n = g_R(u, x, t)$ on $\Gamma_R \subset \partial\Omega$. We explain each step as follows. In a PINN, we first construct a neural network $\hat{u}(\mathbf{x}; \boldsymbol{\theta})$ as a surrogate of the solution $u(\mathbf{x})$, which takes the input $\mathbf{x}$ and outputs a vector with the same dimension as $u$. Here, $\boldsymbol{\theta} = \{\boldsymbol{W}^\ell, \boldsymbol{b}^\ell\}_{1 \le \ell \le L}$ is the set of all weight matrices and bias vectors in the neural network $\hat{u}$. One advantage of PINNs from choosing neural networks as the surrogate of $u$ is that we can take the derivatives of $\hat{u}$ with respect to its input $\mathbf{x}$ by applying the chain rule for differentiating compositions of functions using automatic differentiation (AD), which is conveniently integrated in machine learning packages, such as TensorFlow [1] and PyTorch [26].

In the next step, we need to restrict the neural network $\hat{u}$ to satisfy the physics imposed by the PDE and boundary conditions. It is hard to restrict $\hat{u}$ in the whole domain, so instead we restrict $\hat{u}$ on some scattered points, i.e., the training data $\mathcal{T} = \{\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_{|\mathcal{T}|}\}$ of size $|\mathcal{T}|$. In addition, $\mathcal{T}$ comprises two sets, $\mathcal{T}_f \subset \Omega$ and $\mathcal{T}_b \subset \partial\Omega$, which are the points in the domain and on the boundary, respectively. We refer to $\mathcal{T}_f$ and $\mathcal{T}_b$ as the sets of “residual points”.

To measure the discrepancy between the neural network $\hat{u}$ and the constraints, we consider the loss function defined as the weighted summation of the $L^2$ norm of residuals for the equation and boundary conditions:

$$\mathcal{L}(\boldsymbol{\theta}; \mathcal{T}) = w_f \mathcal{L}_f(\boldsymbol{\theta}; \mathcal{T}_f) + w_b \mathcal{L}_b(\boldsymbol{\theta}; \mathcal{T}_b), \quad (2)$$

where

$$\mathcal{L}_f(\boldsymbol{\theta}; \mathcal{T}_f) = \frac{1}{|\mathcal{T}_f|} \sum_{\mathbf{x} \in \mathcal{T}_f} \left\| f\left(\mathbf{x}; \frac{\partial \hat{u}}{\partial x_1}, \ldots, \frac{\partial \hat{u}}{\partial x_d}; \ldots; \boldsymbol{\lambda}\right) \right\|_2^2, \qquad \mathcal{L}_b(\boldsymbol{\theta}; \mathcal{T}_b) = \frac{1}{|\mathcal{T}_b|} \sum_{\mathbf{x} \in \mathcal{T}_b} \|\mathcal{B}(\hat{u}, \mathbf{x})\|_2^2,$$

and $w_f$ and $w_b$ are the weights. The loss involves derivatives, such as the partial derivative $\partial \hat{u}/\partial x_1$ or the normal derivative at the boundary $\partial \hat{u}/\partial n = \nabla \hat{u} \cdot \mathbf{n}$, which are handled via AD.
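As an illustration of how Eq. 2 is assembled in practice, below is a minimal sketch of the loss for a toy 1D Poisson problem $-u_{xx} = \pi^2 \sin(\pi x)$ on $(0, 1)$ with $u(0) = u(1) = 0$, written directly in TensorFlow rather than DeepXDE; the network size, residual points, and forcing term are illustrative.

import numpy as np
import tensorflow as tf

net = tf.keras.Sequential(
    [tf.keras.layers.Dense(20, activation="tanh"), tf.keras.layers.Dense(1)]
)
forcing = lambda x: np.pi ** 2 * tf.sin(np.pi * x)   # f(x) of the toy problem

x_f = tf.random.uniform((100, 1))                    # residual points in the domain, T_f
x_b = tf.constant([[0.0], [1.0]])                    # residual points on the boundary, T_b

def loss(w_f=1.0, w_b=1.0):
    # Second derivative u_xx via nested gradient tapes (automatic differentiation).
    with tf.GradientTape() as t2:
        t2.watch(x_f)
        with tf.GradientTape() as t1:
            t1.watch(x_f)
            u = net(x_f)
        u_x = t1.gradient(u, x_f)
    u_xx = t2.gradient(u_x, x_f)
    L_f = tf.reduce_mean(tf.square(-u_xx - forcing(x_f)))  # PDE residual term
    L_b = tf.reduce_mean(tf.square(net(x_b)))              # boundary residual term (u = 0)
    return w_f * L_f + w_b * L_b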

In the last step, the procedure of searching for a good $\boldsymbol{\theta}$ by minimizing the loss $\mathcal{L}(\boldsymbol{\theta}; \mathcal{T})$ is called “training”. Because the loss is highly nonlinear and non-convex with respect to $\boldsymbol{\theta}$ [6], we usually minimize the loss function with gradient-based optimizers, such as gradient descent, Adam [17], and L-BFGS [8].

In the algorithm of PINN introduced above, we enforce soft constraints of boundary/initial conditions through the loss $\mathcal{L}_b$. This approach can be used for complex domains and any type of boundary conditions. On the other hand, it is possible to enforce hard constraints for simple cases [18]. For example, when the boundary condition is $u(0) = u(1) = 0$ with $\Omega = [0, 1]$, we can simply choose the surrogate model as $\hat{u}(x) = x(x-1)\mathcal{N}(x)$ to satisfy the boundary condition automatically, where $\mathcal{N}(x)$ is a neural network.
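For the example above, such a hard-constraint surrogate can be sketched in one line; here net is any neural network mapping $x$ to $\mathcal{N}(x)$ (e.g., the NumPy FNN sketched earlier), and the factor $x(x-1)$ makes the output vanish at $x = 0$ and $x = 1$ by construction.

def hard_constraint_surrogate(x, net):
    # x(x - 1) vanishes at x = 0 and x = 1, so the boundary condition holds exactly
    # and no boundary loss term is needed.
    return x * (x - 1.0) * net(x)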

We note that the residual points $\mathcal{T}$ can be chosen very flexibly, and here we provide three possible strategies:

  1. We specify the residual points at the beginning of training, which could be grid points on a lattice or random points, and never change them during the training process.

  2. In each optimization iteration, we randomly select different residual points.

  3. We improve the location of the residual points adaptively during training, e.g., the method proposed in Section 2.7.

When the number of residual points is very large, it is computationally expensive to calculate the loss and gradient in every iteration. Instead of using all residual points, we can split the residual points into small batches, and in each iteration we only use one batch to calculate the loss and update the model parameters; this is called mini-batch gradient descent. The aforementioned strategy (2), i.e., re-sampling in each step, is a special case of mini-batch gradient descent in which each batch is a fresh set of randomly sampled residual points.

Recent studies show that for function approximation, neural networks learn target functions from low to high frequencies [29, 35], but here we show that the learning mode of PINNs is different due to the presence of high-order derivatives. For example, when we approximate a target function in an interval by a NN, the function is learned from low to high frequency (Fig. 2A). However, when we employ a PINN to solve the Poisson equation with zero boundary conditions in the same domain, all frequencies are learned almost simultaneously (Fig. 2B). Interestingly, by comparing Fig. 2A and Fig. 2B we can see that, at least in this case, solving the PDE using a PINN is faster than approximating a function using a NN. We can monitor this training process using the callback functions in our library DeepXDE, as discussed later.

Figure 2: Convergence of the amplitude for each frequency during the training process. (A) A neural network is trained to approximate a target function. The color represents the amplitude, with the maximum amplitude for each frequency normalized to 1. (B) A PINN is used to solve the Poisson equation with zero boundary conditions. We use a neural network of 4 hidden layers and 20 neurons per layer, trained on 500 randomly sampled points.

2.3 Approximation theory and error analysis for PINNs

One fundamental question related to PINNs is whether there exists a neural network satisfying both the PDE equation and the boundary conditions, i.e., whether there exists a neural network that can simultaneously and uniformly approximate a function and its partial derivatives. To address this question, we first introduce some notation. Let $\mathbb{Z}_+^d$ be the set of $d$-dimensional nonnegative integers. For $\mathbf{m} = (m_1, \ldots, m_d) \in \mathbb{Z}_+^d$, we set $|\mathbf{m}| := m_1 + \cdots + m_d$, and

$$D^{\mathbf{m}} := \frac{\partial^{|\mathbf{m}|}}{\partial x_1^{m_1} \cdots \partial x_d^{m_d}}.$$

We say $\mathbf{m} \le \mathbf{p}$ if $m_i \le p_i$ for all $i = 1, \ldots, d$. Then, we recall the following theorem of derivative approximation using single hidden layer neural networks due to Pinkus [27].

Theorem 1

Let $\mathbf{m}^i \in \mathbb{Z}_+^d$, $i = 1, \ldots, s$, and set $m = \max_i |\mathbf{m}^i|$. Assume $\sigma \in C^m(\mathbb{R})$ and $\sigma$ is not a polynomial. Then the space of single hidden layer neural nets

$$\mathcal{M}(\sigma) := \mathrm{span}\{\sigma(\mathbf{w} \cdot \mathbf{x} + b) : \mathbf{w} \in \mathbb{R}^d,\ b \in \mathbb{R}\}$$

is dense in

$$C^{\mathbf{m}^1, \ldots, \mathbf{m}^s}(\mathbb{R}^d) := \bigcap_{i=1}^{s} C^{\mathbf{m}^i}(\mathbb{R}^d),$$

i.e., for any $f \in C^{\mathbf{m}^1, \ldots, \mathbf{m}^s}(\mathbb{R}^d)$, any compact $K \subset \mathbb{R}^d$, and any $\varepsilon > 0$, there exists a $g \in \mathcal{M}(\sigma)$ satisfying

$$\max_{\mathbf{x} \in K} \left| D^{\mathbf{k}} f(\mathbf{x}) - D^{\mathbf{k}} g(\mathbf{x}) \right| < \varepsilon,$$

for all $\mathbf{k} \in \mathbb{Z}_+^d$ for which $\mathbf{k} \le \mathbf{m}^i$ for some $i$.

Theorem 1 shows that feed-forward neural nets with enough neurons can simultaneously and uniformly approximate any function and its partial derivatives. However, neural networks in practice have limited size. Let $\mathcal{F}$ denote the family of all the functions that can be represented by our chosen neural network architecture. The solution $u$ is unlikely to belong to the family $\mathcal{F}$, and we define $u_{\mathcal{F}} = \arg\min_{f \in \mathcal{F}} \|f - u\|$ as the best function in $\mathcal{F}$ close to $u$ (Fig. 3). Because we only train the neural network on the training set $\mathcal{T}$, we define $u_{\mathcal{T}} = \arg\min_{f \in \mathcal{F}} \mathcal{L}(f; \mathcal{T})$ as the neural network whose loss is at a global minimum. For simplicity, we assume that $u$, $u_{\mathcal{F}}$, and $u_{\mathcal{T}}$ are well defined and unique. Finding $u_{\mathcal{T}}$ by minimizing the loss is often computationally intractable [6], and our optimizer returns an approximate solution $\tilde{u}_{\mathcal{T}}$.

Figure 3: Illustration of the errors of a PINN. The total error consists of the approximation error, the optimization error, and the generalization error. Here, $u$ is the PDE solution, $u_{\mathcal{F}}$ is the best function in the function space $\mathcal{F}$ closest to $u$, $u_{\mathcal{T}}$ is the neural network whose loss is at a global minimum, and $\tilde{u}_{\mathcal{T}}$ is the function obtained by training a neural network.

We can then decompose the total error $\mathcal{E}$ as [7]

$$\mathcal{E} := \|\tilde{u}_{\mathcal{T}} - u\| \le \underbrace{\|\tilde{u}_{\mathcal{T}} - u_{\mathcal{T}}\|}_{\mathcal{E}_{opt}} + \underbrace{\|u_{\mathcal{T}} - u_{\mathcal{F}}\|}_{\mathcal{E}_{gen}} + \underbrace{\|u_{\mathcal{F}} - u\|}_{\mathcal{E}_{app}}.$$

The approximation error $\mathcal{E}_{app}$ measures how closely $u_{\mathcal{F}}$ can approximate $u$. The generalization error $\mathcal{E}_{gen}$ is determined by the number and locations of residual points in $\mathcal{T}$ and the capacity of the family $\mathcal{F}$. Neural networks of larger size have smaller approximation errors but could lead to higher generalization errors, which is called the bias-variance tradeoff. Overfitting occurs when the generalization error dominates. In addition, the optimization error $\mathcal{E}_{opt}$ stems from the complexity of the loss function and the optimization setup, such as the learning rate and the number of iterations.

2.4 Comparison between PINNs and FEM

To further explain the ideas of PINNs and to help readers who are familiar with FEM understand PINNs more easily, we make a comparison between PINNs and FEM point by point (Table 1):

  • In FEM we approximate the solution by a piecewise polynomial with point values to be determined, while in PINNs we construct a neural network as the surrogate model parameterized by weights and biases.

  • FEM typically requires mesh generation, while PINN is totally mesh-free, and we can use either a grid or random points.

  • FEM converts a PDE to an algebraic system, using the stiffness matrix and mass matrix, while PINN embeds the PDE and boundary conditions into the loss function.

  • In the last step, the algebraic system in FEM is solved exactly by a linear solver, but the network in PINN is learned by a gradient-based optimizer.

At a more fundamental level, PINNs provide a nonlinear approximation to the function and its derivatives, whereas FEM represents a linear approximation.

                 | PINN                                                       | FEM
Basis function   | Neural network (nonlinear)                                 | Piecewise polynomial (linear)
Parameters       | Weights and biases                                         | Point values
Training points  | Scattered points (mesh-free)                               | Mesh points
PDE embedding    | Loss function                                              | Algebraic system
Parameter solver | Gradient-based optimizer                                   | Linear solver
Errors           | $\mathcal{E}_{app}$, $\mathcal{E}_{gen}$, and $\mathcal{E}_{opt}$ (Section 2.3) | Approximation/quadrature errors
Table 1: Comparison between PINN and FEM.

2.5 PINNs for solving integro-differential equations

When solving integro-differential equations (IDEs), we still employ automatic differentiation to analytically derive the integer-order derivatives, while we approximate the integral operators numerically using classical methods (Fig. 4) [25], such as Gaussian quadrature. Therefore, we introduce a fourth error component, the discretization error, due to the approximation of the integral by Gaussian quadrature.

For example, when solving

$$\frac{dy}{dx} + y(x) = \int_0^x e^{t-x} y(t)\, dt,$$

we first use Gaussian quadrature of degree $n$ to approximate the integral

$$\int_0^x e^{t-x} y(t)\, dt \approx \sum_{i=1}^{n} w_i\, e^{t_i(x)-x} y(t_i(x)),$$

and then we use a PINN to solve the following PDE instead of the original equation:

$$\frac{dy}{dx} + y(x) \approx \sum_{i=1}^{n} w_i\, e^{t_i(x)-x} y(t_i(x)).$$
PINNs can also be easily extended to solve FDEs [25] and SDEs [38, 36, 24, 37], but we do not discuss here such cases due to the page limit.

Figure 4: Schematic illustrating the modification of the PINN algorithm for solving integro-differential equations. We employ the automatic differentiation technique to analytically derive the integer-order derivatives, and we approximate integral operators numerically using standard methods. (The figure is reproduced from [25].)

2.6 PINNs for solving inverse problems

In inverse problems, there are some unknown parameters $\boldsymbol{\lambda}$ in Eq. 1, but we have some extra information on some points $\mathcal{T}_i \subset \Omega$ besides the differential equation and boundary conditions:

$$\mathcal{I}(u, \mathbf{x}) = 0, \quad \text{for } \mathbf{x} \in \mathcal{T}_i.$$

PINNs solve inverse problems as easily as forward problems. The only difference between solving forward and inverse problems is that we add an extra loss term to Eq. 2:

$$\mathcal{L}(\boldsymbol{\theta}, \boldsymbol{\lambda}; \mathcal{T}) = w_f \mathcal{L}_f(\boldsymbol{\theta}, \boldsymbol{\lambda}; \mathcal{T}_f) + w_b \mathcal{L}_b(\boldsymbol{\theta}, \boldsymbol{\lambda}; \mathcal{T}_b) + w_i \mathcal{L}_i(\boldsymbol{\theta}, \boldsymbol{\lambda}; \mathcal{T}_i),$$

where

$$\mathcal{L}_i(\boldsymbol{\theta}, \boldsymbol{\lambda}; \mathcal{T}_i) = \frac{1}{|\mathcal{T}_i|} \sum_{\mathbf{x} \in \mathcal{T}_i} \|\mathcal{I}(\hat{u}, \mathbf{x})\|_2^2.$$

We then optimize $\boldsymbol{\theta}$ and $\boldsymbol{\lambda}$ together, and our solution is $\boldsymbol{\theta}^*, \boldsymbol{\lambda}^* = \arg\min_{\boldsymbol{\theta}, \boldsymbol{\lambda}} \mathcal{L}(\boldsymbol{\theta}, \boldsymbol{\lambda}; \mathcal{T})$.
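The following is a minimal TensorFlow sketch of this joint optimization (not the DeepXDE implementation): the unknown parameter $\boldsymbol{\lambda}$ is simply an additional trainable variable, and the observations contribute an extra data-misfit term; the PDE-residual loss is left as a user-supplied placeholder, and the network and observations are illustrative.

import tensorflow as tf

net = tf.keras.Sequential(
    [tf.keras.layers.Dense(20, activation="tanh"), tf.keras.layers.Dense(1)]
)
net(tf.zeros((1, 1)))                        # build the network so its weights exist
lam = tf.Variable(1.0)                       # unknown PDE parameter lambda
opt = tf.keras.optimizers.Adam(1e-3)

x_i = tf.random.uniform((50, 1))             # observation points T_i
u_obs = tf.sin(3.0 * x_i)                    # placeholder observations

def train_step(pde_loss_fn, w_f=1.0, w_i=1.0):
    # One step minimizing w_f * L_f + w_i * L_i over the network weights and lambda jointly;
    # pde_loss_fn(net, lam) is the user's PDE-residual loss (see the forward sketch above).
    variables = net.trainable_variables + [lam]
    with tf.GradientTape() as tape:
        L_i = tf.reduce_mean(tf.square(net(x_i) - u_obs))   # extra data-misfit term
        total = w_f * pde_loss_fn(net, lam) + w_i * L_i
    opt.apply_gradients(zip(tape.gradient(total, variables), variables))
    return total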

2.7 Residual-based adaptive refinement (RAR)

As we discussed in Section 2.2, the residual points $\mathcal{T}$ are usually randomly distributed in the domain. This works well for most cases, but it may not be efficient for certain PDEs that exhibit solutions with steep gradients. Take the Burgers equation as an example: intuitively, we should put more points near the sharp front to capture the discontinuity well. However, it is challenging, in general, to design a good distribution of residual points for problems whose solution is unknown. To overcome this challenge, we propose a residual-based adaptive refinement (RAR) method to improve the distribution of residual points during the training process (Procedure 2; a short code sketch follows the procedure), conceptually similar to FEM refinement methods [2]. The idea of RAR is that we add more residual points in the locations where the PDE residual is large, and we repeat adding points until the mean residual

$$\mathcal{E}_r = \frac{1}{V} \int_{\Omega} \left| f\left(\mathbf{x}; \frac{\partial \hat{u}}{\partial x_1}, \ldots; \boldsymbol{\lambda}\right) \right| d\mathbf{x} \quad (3)$$

is smaller than a threshold $\mathcal{E}_0$, where $V$ is the volume of $\Omega$.

  1. Select the initial residual points $\mathcal{T}$, and train the neural network for a limited number of iterations.

  2. Estimate the mean PDE residual $\mathcal{E}_r$ in Eq. 3 by Monte Carlo integration, i.e., by the average of the residual values at a set of randomly sampled locations $\mathcal{S} = \{\mathbf{x}_1, \ldots, \mathbf{x}_{|\mathcal{S}|}\}$:

    $$\mathcal{E}_r \approx \frac{1}{|\mathcal{S}|} \sum_{\mathbf{x} \in \mathcal{S}} \left| f\left(\mathbf{x}; \frac{\partial \hat{u}}{\partial x_1}, \ldots; \boldsymbol{\lambda}\right) \right|.$$

  3. Stop if $\mathcal{E}_r < \mathcal{E}_0$. Otherwise, add $m$ new points with the largest residuals in $\mathcal{S}$ to $\mathcal{T}$, and go to Step 2.

Procedure 2 RAR for improving the distribution of residual points for training.
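A compact sketch of Procedure 2 follows; train, pde_residual, and sample_domain are placeholders for the user's training routine, pointwise PDE residual (returning a 1D array of residual values), and random domain sampler, and retraining after each refinement is assumed.

import numpy as np

def rar(train, pde_residual, sample_domain, points, eps=0.01, n_add=10, n_mc=10000):
    train(points)                                        # Step 1: initial training on T
    while True:
        candidates = sample_domain(n_mc)                 # random locations for Monte Carlo
        res = np.abs(pde_residual(candidates))           # |f(x; ...)| at each candidate
        if res.mean() < eps:                             # Step 3: stop when the mean residual is small
            return points
        worst = candidates[np.argsort(res)[-n_add:]]     # points with the largest residuals
        points = np.concatenate([points, worst], axis=0) # enlarge T ...
        train(points)                                    # ... retrain, and go back to Step 2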

3 DeepXDE usage and customization

In this section, we introduce the usage of DeepXDE and how to customize DeepXDE to meet new demands.

3.1 Usage

DeepXDE keeps the user code compact, resembling closely the mathematical formulation. Solving differential equations in DeepXDE amounts to specifying the problem using the built-in modules, including the computational domain (geometry and time), the PDE, the boundary/initial conditions, constraints, training data, the neural network architecture, and the training hyperparameters. The workflow is shown in Procedure 3 and Fig. 5, and a minimal code example that follows these steps is given below.

  1. Specify the computational domain using the geometry module.

  2. Specify the PDE using the grammar of TensorFlow.

  3. Specify the boundary and initial conditions.

  4. Combine the geometry, PDE and boundary/initial conditions together into data.PDE for time-independent problems or data.TimePDE for time-dependent problems. To specify training data, we can either set the specific point locations, or only set the number of points and then DeepXDE will sample the required number of points on a grid or randomly.

  5. Construct a neural network using the maps module.

  6. Define a Model by combining the PDE problem in Step 4 and the neural net in Step 5.

  7. Call Model.compile to set the optimization hyperparameters, such as optimizer and learning rate. The weights in Eq. 2 can be set here by loss_weights.

  8. Call Model.train to train the network from random initialization or from a pre-trained model using the argument model_restore_path. The training behavior can be monitored and modified flexibly using callbacks.

  9. Call Model.predict to predict the PDE solution at different locations.

Procedure 3 Usage of DeepXDE for solving differential equations.
Figure 5: Flowchart of DeepXDE corresponding to Procedure 3. The white boxes define the PDE problem and the training hyperparameters. The blue boxes combine the PDE problem and training hyperparameters in the white boxes. The orange boxes are the three steps (from right to left) to solve the PDE.
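As a concrete instance of Procedure 3, the following sketch solves the 1D Poisson problem $-u''(x) = 2$ on $(0, 1)$ with $u(0) = u(1) = 0$ (exact solution $u = x(1-x)$). It assumes a recent DeepXDE release; names such as dde.grad.hessian and the epochs argument may differ slightly across versions.

import deepxde as dde

def pde(x, y):
    # PDE residual of -u'' = 2, written as u_xx + 2 = 0.
    return dde.grad.hessian(y, x) + 2

geom = dde.geometry.Interval(0, 1)                                  # Step 1: geometry
bc = dde.DirichletBC(geom, lambda x: 0, lambda x, on_b: on_b)       # Step 3: u = 0 on the boundary
data = dde.data.PDE(geom, pde, bc, num_domain=32, num_boundary=2)   # Step 4: training points
net = dde.maps.FNN([1, 20, 20, 20, 1], "tanh", "Glorot uniform")    # Step 5: neural network
model = dde.Model(data, net)                                        # Step 6: model
model.compile("adam", lr=0.001)                                     # Step 7: optimizer settings
model.train(epochs=10000)                                           # Step 8: training
u_pred = model.predict(geom.uniform_points(100))                    # Step 9: prediction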

In DeepXDE, the built-in primitive geometries include interval, triangle, rectangle, polygon, disk, cuboid and sphere. Other geometries can be constructed from these primitive geometries using three Boolean operations: union (|), difference (-) and intersection (&). This technique is called constructive solid geometry (CSG); see Fig. 6 for examples. CSG supports both two-dimensional and three-dimensional geometries.
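For example, a rectangle with a circular hole can be sketched with the difference operator (the sizes and positions below are illustrative):

import deepxde as dde

rect = dde.geometry.Rectangle(xmin=[0, 0], xmax=[2, 1])
disk = dde.geometry.Disk([1.0, 0.5], 0.25)
geom = rect - disk            # CSG difference; | gives the union and & the intersection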

Figure 6: Examples of constructive solid geometry (CSG) in 2D. (left) A and B represent the rectangle and circle, respectively. The union $A \cup B$, difference $A \setminus B$, and intersection $A \cap B$ are constructed from A and B. (right) A complex geometry (top) is constructed from a polygon, a rectangle and two circles (bottom) through the union, difference, and intersection operations. This capability is included in the module geometry of DeepXDE.

DeepXDE supports four standard boundary conditions, including Dirichlet (DirichletBC), Neumann (NeumannBC), Robin (RobinBC), and periodic (PeriodicBC). The initial condition can be defined using IC. There are two types of neural networks available in DeepXDE: feed-forward neural network (maps.FNN) and residual neural network (maps.ResNet). It is also convenient to choose different training hyperparameters, such as loss types, metrics, optimizers, learning rate schedules, initializations and regularizations.
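As a short sketch of these classes for a time-dependent problem on $x \in [-1, 1]$, $t \in [0, 1]$ (the boundary and initial functions below are illustrative):

import numpy as np
import deepxde as dde

geom = dde.geometry.Interval(-1, 1)
timedomain = dde.geometry.TimeDomain(0, 1)
geomtime = dde.geometry.GeometryXTime(geom, timedomain)

bc = dde.DirichletBC(geomtime, lambda x: 0, lambda x, on_boundary: on_boundary)
ic = dde.IC(geomtime, lambda x: -np.sin(np.pi * x[:, 0:1]), lambda x, on_initial: on_initial)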

In addition to solving differential equations, DeepXDE can also be used to approximate functions from a dataset with constraints, and approximate functions from multi-fidelity data using the method proposed in [23].

3.2 Customizability

All the components of DeepXDE are loosely coupled, and thus DeepXDE is well-structured and highly configurable. In this subsection, we discuss how to customize DeepXDE to meet the new demands.

3.2.1 Geometry

As introduced above, DeepXDE already supports seven basic geometries and the CSG technique. However, it is still possible that the user needs a new geometry that cannot be constructed in DeepXDE. In this situation, a new geometry can be defined as shown in Procedure 4.

class MyGeometry(Geometry):
    def inside(self, x):
        """Check if x is inside the geometry."""
    def on_boundary(self, x):
        """Check if x is on the geometry boundary."""
    def boundary_normal(self, x):
        """Compute the unit normal at x for Neumann or Robin boundary conditions."""
    def periodic_point(self, x, component):
        """Compute the periodic image of x for periodic boundary condition."""
    def uniform_points(self, n, boundary=True):
        """Compute the equispaced point locations in the geometry."""
    def random_points(self, n, random="pseudo"):
        """Compute the random point locations in the geometry."""
    def uniform_boundary_points(self, n):
        """Compute the equispaced point locations on the boundary."""
    def random_boundary_points(self, n, random="pseudo"):
        """Compute the random point locations on the boundary."""

Procedure 4 Customization of the new geometry module MyGeometry. The class methods should only be implemented as needed.

3.2.2 Neural networks

DeepXDE currently supports two neural networks: feed-forward neural network (maps.FNN) and residual neural network (maps.ResNet). A new network can be added as shown in Procedure 5.

class MyNet(Map):
    @property
    def inputs(self):
        """Return the net inputs."""
    @property
    def outputs(self):
        """Return the net outputs."""
    @property
    def targets(self):
        """Return the targets of the net outputs."""
    def build(self):
        """Construct the network."""

Procedure 5 Customization of the neural network MyNet.

3.2.3 Callbacks

It is usually a good strategy to monitor the training process of the neural network, and then make modifications in real time, e.g., change the learning rate. In DeepXDE, this can be implemented by adding a callback function, and here we only list a few commonly used ones already implemented in DeepXDE:

  • ModelCheckpoint, which saves the model after certain epochs or when a better model is found.

  • OperatorPredictor, which calculates the values of the operator applied to the outputs.

  • FirstDerivative, which calculates the first derivative of the outputs with respect to the inputs. This is a special case of OperatorPredictor with the operator being the first derivative.

  • MovieDumper, which dumps a movie of the function during training, and/or a movie of the spectrum of its Fourier transform.

It is very convenient to add other callback functions, which will be called at different stages of the training process, see Procedure 6.

class MyCallback(Callback):
    def on_epoch_begin(self):
        """Called at the beginning of every epoch."""
    def on_epoch_end(self):
        """Called at the end of every epoch."""

Procedure 6 Customization of the callback MyCallback. Here, we only show how to add functions to be called at the beginning/end of every epoch. Similarly, we can call functions at the other training stages, such as at the beginning of training.

4 Demonstration examples

In this section, we use PINNs and DeepXDE to solve different problems. In all examples, we use $\tanh$ as the activation function, and the other hyperparameters are listed in Table 2. The weights $w_f$, $w_b$, and $w_i$ in the loss function are all set to 1. The codes of all examples are published on GitHub.

Example | NN Depth | NN Width | Optimizer    | Learning rate | # Iterations
1       | 4        | 50       | Adam, L-BFGS | 0.001         | 50000
2       | 3        | 20       | Adam, L-BFGS | 0.001         | 15000
3       | 3        | 40       | Adam         | 0.001         | 60000
4       | 3        | 20       | Adam         | 0.001         | 80000
5       | 4        | 20       | L-BFGS       | -             | -
Table 2: Hyperparameters used for the following 5 examples. "Adam, L-BFGS" means that we first use Adam for a certain number of iterations and then switch to L-BFGS. L-BFGS does not require a learning rate, and the neural network is trained until convergence, so the learning rate and the number of iterations are not listed for L-BFGS.

4.1 Poisson equation over an L-shaped domain

Consider the following two-dimensional Poisson equation over an L-shaped domain $\Omega = [-1, 1]^2 \setminus [0, 1]^2$:

$$-\Delta u(x, y) = 1, \quad (x, y) \in \Omega, \qquad u(x, y) = 0, \quad (x, y) \in \partial\Omega.$$

We choose 1200 and 120 random points drawn from a uniform distribution as $\mathcal{T}_f$ and $\mathcal{T}_b$, respectively. The PINN solution is given in Fig. 7B. For comparison, we also present the numerical solution obtained by using the spectral element method (SEM) [15] (Fig. 7A). The absolute error is shown in Fig. 7C.

Figure 7: Example 4.1. Comparison of the PINN solution with the solution obtained by using the spectral element method (SEM). (A) the SEM solution, (B) the PINN solution, (C) the absolute error between the two.

4.2 RAR for Burgers equation

We consider the Burgers equation:

$$\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = \nu \frac{\partial^2 u}{\partial x^2}, \quad x \in [-1, 1],\ t \in [0, 1],$$

$$u(x, 0) = -\sin(\pi x), \qquad u(-1, t) = u(1, t) = 0.$$

Let $\nu = 0.01/\pi$. Initially, we randomly select 2500 points in the spatio-temporal domain as the residual points, and then 40 more residual points are added adaptively via the RAR method developed in Section 2.7. We compare the PINN solution with RAR and the PINN solution based on 2540 randomly selected training points (Fig. 8), and demonstrate that the PINN with RAR can capture the discontinuity much better. For comparison, the finite difference solutions using the Crank-Nicolson scheme for space discretization and the forward Euler scheme for time discretization are also shown in Fig. 8A.

Figure 8: Example 4.2. Comparisons of the PINN solutions with and without RAR. (A) The cyan, green, and red lines represent the reference solution of $u$ from [30], the PINN solution without RAR, and the PINN solution with RAR, respectively. For the finite difference (FD) method, a fine spatial-temporal grid is required to achieve a good solution (blue line); with fewer grid points, the FD solution has large oscillations around the discontinuity (brown line). (B) $L^2$ relative error versus the number of residual points. The red solid line and shaded region correspond to the mean and one-standard-deviation band of the $L^2$ relative error of the PINN with RAR, respectively. The blue dashed line and shaded region are the mean and one-standard-deviation band of the error of the PINN using 2540 random residual points. The mean and standard deviation are obtained from 10 runs with random initial residual points.

4.3 Inverse problem for the Lorenz system

Consider the parameter identification problem of the following Lorenz system:

$$\frac{dx}{dt} = \sigma (y - x), \qquad \frac{dy}{dt} = x(\rho - z) - y, \qquad \frac{dz}{dt} = xy - \beta z,$$

with a given initial condition $(x(0), y(0), z(0))$, where $\sigma$, $\rho$, and $\beta$ are the three parameters to be identified from the observations at certain times. The observations are produced by solving the above system with Runge-Kutta (4,5) using the underlying true parameters. We choose 400 uniformly distributed random points and 25 equispaced points as the residual points $\mathcal{T}_f$ and the observation points $\mathcal{T}_i$, respectively. The evolution trajectories of the identified $\sigma$, $\rho$, and $\beta$ are presented in Fig. 9A, with the final identified values converging to the true values.

Figure 9: Examples 4.3 and 4.4. The identified values of (A) the Lorenz system and (B) diffusion-reaction system converge to the true values during the training process. The parameter values are scaled for plotting.

4.4 Inverse problem for diffusion-reaction systems

A diffusion-reaction system in porous media for the solute concentrations $C_A$, $C_B$, and $C_C$ ($A + 2B \rightarrow C$) is described by

$$\frac{\partial C_A}{\partial t} = D \frac{\partial^2 C_A}{\partial x^2} - k_f C_A C_B^2, \qquad \frac{\partial C_B}{\partial t} = D \frac{\partial^2 C_B}{\partial x^2} - 2 k_f C_A C_B^2, \qquad x \in [0, 1],$$

where $D$ is the effective diffusion coefficient, and $k_f$ is the effective reaction rate. Because $D$ and $k_f$ depend on the pore structure and are difficult to measure directly, we estimate $D$ and $k_f$ based on 40000 observations of the concentrations $C_A$ and $C_B$ in the spatio-temporal domain. The identified values of $D$ and $k_f$ (the latter being 0.0971) are displayed in Fig. 9B and agree well with their true values.

4.5 Volterra IDE

Here, we consider the first-order integro-differential equation of the Volterra type in the domain $[0, 5]$:

$$\frac{dy}{dx} + y(x) = \int_0^x e^{t-x} y(t)\, dt, \qquad y(0) = 1,$$

with the exact solution $y(x) = e^{-x} \cosh x$. We solve this IDE using the method in Section 2.5, and approximate the integral using Gauss-Legendre quadrature of degree 20. The relative error between the PINN solution and the exact solution is small, and the solution is shown in Fig. 10.

Figure 10: Example 4.5. The PINN algorithm for solving the Volterra IDE. The blue solid line is the exact solution, and the red dashed line is the numerical solution from the PINN. 12 equispaced residual points (black dots) are used.

5 Concluding Remarks

In this paper, we present the algorithm, approximation theory, and error analysis of the physics-informed neural networks (PINNs) for solving different types of partial differential equations (PDEs). Compared to the traditional numerical methods, PINNs employ automatic differentiation to handle differential operators, and thus they are mesh-free. Unlike numerical differentiation, automatic differentiation does not differentiate the data and hence it can tolerate noisy data for training. We also discuss how to extend PINNs to solve other types of differential equations, such as integro-differential equations, and also how to solve inverse problems. In addition, we propose a residual-based adaptive refinement (RAR) method to improve the distribution of residual points during the training process, and thus increase the training efficiency.

To benefit both the education and the computational science communities, we have developed the Python library DeepXDE, an implementation of PINNs. By introducing the usage of DeepXDE, we show that DeepXDE enables user codes to be compact and to follow closely the mathematical formulation. We also demonstrate how to customize DeepXDE to meet new demands. Our numerical examples for forward and inverse problems verify the effectiveness of PINNs and the capability of DeepXDE. Scientific machine learning is emerging as a new and potentially powerful alternative to classical scientific computing, so we hope that libraries such as DeepXDE will accelerate this development and will make it accessible in the classroom as well as to other researchers who may find the need to adopt PINN-like methods in their research, which can be very effective especially for inverse problems.

Despite the aforementioned advantages, PINNs still have some limitations. For forward problems, PINNs are currently slower than finite elements, but this can be alleviated via offline training [39, 34]. For long-time integration, one can also use time-parallel methods to simultaneously compute on multiple GPUs for shorter time domains. Another limitation is the search for effective neural network architectures, which is currently done empirically by users; however, emerging meta-learning techniques can be used to automate this search, see [40, 11]. Moreover, while here we enforce the strong form of PDEs, which is easy to implement via automatic differentiation, alternative weak/variational forms may also be effective, although they require the use of quadrature grids. Many other extensions for multi-physics and multi-scale problems are possible across different scientific disciplines by creatively designing the loss function and introducing suitable solution spaces. For instance, in the five examples we present here, we only assume data on scattered points; however, in geophysics or biomedicine we may have mixed data in the form of images and point measurements. In this case, we can design a composite neural network consisting of one convolutional neural network and one PINN sharing the same set of parameters, and minimize the total loss, which could be a weighted summation of multiple losses from each neural network.

Acknowledgments

This work is supported by the DOE PhILMs project (No. DE-SC0019453), the AFOSR grant FA9550-17-1-0013, and the DARPA-AIRA grant HR00111990025.

References