PDE-Net 2.0: Learning PDEs from Data with A Numeric-Symbolic Hybrid Deep Network

11/30/2018 · Zichao Long, et al. · Peking University

Partial differential equations (PDEs) are commonly derived based on empirical observations. However, recent advances in technology enable us to collect and store massive amounts of data, which offers new opportunities for data-driven discovery of PDEs. In this paper, we propose a new deep neural network, called PDE-Net 2.0, to discover (time-dependent) PDEs from observed dynamic data with minor prior knowledge of the underlying mechanism that drives the dynamics. The design of PDE-Net 2.0 is based on our earlier work Long2018PDE where the original version of PDE-Net was proposed. PDE-Net 2.0 is a combination of numerical approximation of differential operators by convolutions and a symbolic multi-layer neural network for model recovery. Compared with existing approaches, PDE-Net 2.0 has the most flexibility and expressive power by learning both the differential operators and the nonlinear response function of the underlying PDE model. Numerical experiments show that PDE-Net 2.0 has the potential to uncover the hidden PDE of the observed dynamics and to predict the dynamical behavior for a relatively long time, even in a noisy environment.


1 Introduction

Differential equations, especially partial differential equations (PDEs), play a prominent role in many disciplines in describing the governing physical laws underlying a given system of interest. Traditionally, PDEs are derived mathematically or physically based on some basic principles, e.g. from Schrödinger's equations in quantum mechanics to molecular dynamics models, from Boltzmann equations to Navier-Stokes equations, etc. However, many complex systems in modern applications (such as many problems in multiphase flow, neuroscience, finance, biological science, etc.) still have mechanisms that elude us, and the governing equations of these systems are commonly obtained by empirical formulas [2, 3]. With the rapid development of sensors, computational power, and data storage in the last decade, huge quantities of data can now be easily collected, stored and processed. Such vast quantities of data offer new opportunities for data-driven discovery of (potentially new) physical laws. One may then ask the following interesting and intriguing question: can we learn a PDE model to approximate the observed complex dynamic data?

1.1 Existing Work and Motivations

Earlier attempts on data-driven discovery of hidden physical laws include [4, 5]. Their main idea is to compare numerical differentiations of the experimental data with analytic derivatives of candidate functions, and to apply symbolic regression and evolutionary algorithms to determine the nonlinear dynamical system. When the form of the nonlinear response function of a PDE is known except for some scalar parameters, [6] presented a framework to learn these unknown parameters by introducing regularity between two consecutive time steps using Gaussian processes. Later, in [7], a PDE-constrained interpolation method was introduced to uncover the unknown parameters of the PDE model. An alternative approach is known as the sparse identification of nonlinear dynamics (SINDy) [8, 9, 10, 11, 12, 13]. The key idea of SINDy is to first construct a dictionary of simple functions and partial derivatives that are likely to appear in the equations. Then, it takes advantage of sparsity-promoting techniques (e.g. $\ell_1$ regularization) to select the candidates that most accurately represent the data. In [14], the authors studied the problem of sea surface temperature prediction (SSTP). They assumed that the underlying physical model was an advection-diffusion equation and designed a special neural network according to the general solution of the equation. Compared with traditional numerical methods, their approach showed improvements in accuracy and computational efficiency.

Recent work has greatly advanced the progress of PDE identification from observed data. However, SINDy requires building a sufficiently large dictionary, which may lead to high memory load and computational cost, especially when the number of model variables is large. Furthermore, the existing methods based on SINDy treat the spatial and temporal information of the data separately and do not take full advantage of the temporal dependence of the PDE model. Although the framework presented in [6, 7] is able to learn hidden physical laws using less data than approaches based on SINDy, the explicit form of the PDEs is assumed to be known except for a few scalar learnable parameters. The approach of [14] is specifically designed for advection-diffusion equations and cannot be readily extended to other types of equations. Therefore, extracting governing equations from data in a less restrictive setting remains a great challenge.

The main objective of this paper is to design a transparent deep neural network that uncovers hidden PDE models from observed complex dynamic data with minor prior knowledge of the mechanisms of the dynamics, and that performs accurate predictions at the same time. We emphasize both model recovery and prediction because: 1) the ability to conduct accurate long-term prediction is an important indicator of the accuracy of the learned PDE model (the more accurate the prediction, the more confidence we have in the PDE model recovered from the data); 2) the trained neural network can be readily used in applications and does not need to be re-trained when initial conditions are altered. Our inspiration comes from the latest developments of deep learning techniques in computer vision. An interesting fact is that some popular networks in computer vision, such as ResNet [15, 16], have a close relationship with ODEs/PDEs and can be naturally merged with traditional computational mathematics in various tasks [17, 18, 19, 20, 21, 22, 23, 24, 25, 26]. However, existing deep networks mostly emphasize expressive power and prediction accuracy. These networks are not transparent enough to reveal the underlying PDE models, although they may perfectly fit the observed data and perform accurate predictions. Therefore, we need to carefully design the network by combining knowledge from deep learning and numerical PDEs.

1.2 Our Approach

The proposed deep neural network is an upgraded version of our original PDE-Net [1]. The main difference is the use of a symbolic network to approximate the nonlinear response function, which significantly relaxes the requirement of prior knowledge of the PDEs to be recovered. During training, we no longer need to assume that the general type of the PDE (e.g. convection, diffusion, etc.) is known. Furthermore, due to the lack of prior knowledge of the general type of the unknown PDE models, more carefully designed constraints on the convolution filters as well as on the parameters of the symbolic network are introduced. We refer to this upgraded network as PDE-Net 2.0.

Assume that the PDE to be recovered takes the following generic form:

$$u_t = F(u, \nabla u, \nabla^2 u, \ldots).$$

PDE-Net 2.0 is designed as a feed-forward network obtained by discretizing the above PDE using forward Euler in time and finite differences in space. The forward Euler approximation of the temporal derivative makes PDE-Net 2.0 ResNet-like [15, 18, 21], and the finite differences are realized by convolutions with trainable kernels (or filters). The nonlinear response function is approximated by a symbolic neural network, which shall be referred to as SymNet. All the parameters of the SymNet and the convolution kernels are jointly learned from data. To grant full transparency to PDE-Net 2.0, proper constraints are enforced on the SymNet and on the filters. Full details on the architecture and constraints will be presented in Section 2.

1.3 Relation with Model Reduction

Data-driven discovery of hidden physical laws and model reduction have a lot in common. Both are concerned with representing observed data using relatively simple models. The main difference is that model reduction emphasizes numerical precision more than acquiring the analytic form of the model.

It is common practice in model reduction to use a function approximator to express the unknown terms in the reduced models, such as approximating the subgrid stress for large-eddy simulation [27, 28, 29] or approximating interatomic forces for coarse-grained molecular dynamics systems [30, 31]. Our work may serve as an alternative approach to model reduction and help with analyzing the reduced models.

1.4 Novelty

The particular novelties of our approach are that we impose appropriate constraints on the learnable filters and use a properly designed symbolic neural network to approximate the response function $F$. Using learnable filters makes PDE-Net 2.0 more flexible, and enables more powerful approximation of unknown dynamics and longer-time prediction (see numerical experiments in Section 3 and Section 4). Furthermore, the constraints on the learnable filters and the use of a deep symbolic neural network enable us to uncover the analytic form of $F$ with minor prior knowledge of the dynamics, which is the main advantage of PDE-Net 2.0 over the original PDE-Net. In addition, the composite representation by the symbolic network is more efficient and flexible than SINDy. Therefore, the proposed PDE-Net 2.0 is distinct from the existing learning-based methods to discover PDEs from data.

2 PDE-Net 2.0: Architecture, Constraints and Training

Given a series of measurements of some physical quantities $u(t, \cdot): \Omega \to \mathbb{R}^d$, with $d$ being the number of physical quantities of interest, we want to discover the governing PDEs from the observed data $\{u(t_i, \cdot)\}$. We assume that the observed data are associated with a PDE that takes the following general form:

$$u_t = F(u, \nabla u, \nabla^2 u, \ldots), \qquad (1)$$

here $u = u(t, x)$ with $x \in \Omega \subset \mathbb{R}^2$ and $t \in [0, T]$. Our objective is to design a feed-forward network, called PDE-Net 2.0, to approximate the unknown PDE (1) from its solution samples in such a way that: 1) we are able to reveal the analytic form of the response function $F$ and the differential operators involved; and 2) we can conduct long-term prediction of the dynamical behavior of the equation for any given initial condition. There are two main components of PDE-Net 2.0 that are combined in the same network: one is automatic determination of the differential operators involved in the PDE and of their discrete approximations; the other is approximation of the nonlinear response function $F$. In this section, we start with a discussion of the overall framework of PDE-Net 2.0 and then introduce the details of these two components. Regularization and training strategies are given near the end of this section.

2.1 Architecture of PDE-Net 2.0

Inspired by the dynamic system perspective of deep neural networks [17, 18, 19, 20, 21, 22], we consider forward Euler as the temporal discretization of the evolution PDE (1), and unroll the discrete dynamics to a feed-forward network. One may consider more sophisticated temporal discretization which naturally leads to different network architectures [21]. For simplicity, we focus on forward Euler in this paper.

2.1.1 δt-block:

Let $\tilde u(t_{i+1}, \cdot)$ be the predicted value at time $t_{i+1}$ based on $\tilde u(t_i, \cdot)$. Then, we design an approximation framework as follows

$$\tilde u(t_{i+1}, \cdot) = D_0 \tilde u(t_i, \cdot) + \delta t \cdot \mathrm{SymNet}\big(D_{00}\tilde u(t_i, \cdot), D_{01}\tilde u(t_i, \cdot), D_{10}\tilde u(t_i, \cdot), \ldots\big). \qquad (2)$$

Here, the operators $D_0$ and $D_{ij}$ are convolution operators with the underlying filters denoted by $q_0$ and $q_{ij}$, i.e. $D_0 u = q_0 \circledast u$ and $D_{ij} u = q_{ij} \circledast u$. The operators $D_{10}$, $D_{01}$, $D_{11}$, etc. approximate differential operators, i.e. $D_{ij} u \approx \frac{\partial^{i+j} u}{\partial x^i \partial y^j}$. In particular, the operators $D_0$ and $D_{00}$ are certain averaging operators. The purpose of introducing the averaging operators instead of simply using the identity is to improve the expressive power of the network and to enable it to capture more complex dynamics.

Other than the assumption that the observed dynamics is governed by a PDE of the form (1), we assume that the highest order of the PDE is less than some positive integer. Then, we can assume that $F$ is a function of $m$ variables with $m$ known. The task of approximating $F$ in (1) is then equivalent to a multivariate regression problem. In order to be able to identify the analytic form of $F$, we use a symbolic neural network, denoted by $\mathrm{SymNet}^m_k$, to approximate $F$, where $k$ denotes the depth of the network. Note that, if $F$ is a vector-valued function, we use multiple $\mathrm{SymNet}^m_k$'s to approximate the components of $F$ separately.

Combining the aforementioned approximation of the differential operators and of the nonlinear response function, we obtain the approximation framework (2), which will be referred to as a δt-block (see Figure 1). Details of these two components can be found in Section 2.2 and Section 2.3.
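To make the structure of a δt-block concrete, the following is a minimal sketch of one forward-Euler step for a single-component state, assuming PyTorch and hypothetical names (`dt_block`, `symnet`); the filters are assumed to be already scaled by the grid spacing, and the multi-component case as well as the averaging operator $D_0$ are omitted for brevity.

```python
import torch.nn.functional as F

def dt_block(u, filters, symnet, dt):
    """One forward-Euler step of a delta-t block (single-component sketch).

    u       : (batch, 1, H, W) current state on a periodic grid
    filters : (n_ops, 1, k, k) one kernel per operator D_00, D_10, D_01, ...
    symnet  : callable mapping the stacked derivatives (batch, n_ops, H, W)
              to the response (batch, 1, H, W)
    dt      : forward Euler time step
    """
    pad = filters.shape[-1] // 2
    u_pad = F.pad(u, (pad, pad, pad, pad), mode="circular")  # periodic boundary
    derivs = F.conv2d(u_pad, filters)                        # D_ij u, stacked on channels
    return u + dt * symnet(derivs)                           # forward Euler update
```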

Figure 1: The schematic diagram of a δt-block.

2.1.2 Multiple δt-Blocks:

One δt-block only guarantees the accuracy of the one-step dynamics, which does not take error accumulation into consideration. In order to facilitate long-term prediction, we stack multiple δt-blocks into a deep network, and call this network the PDE-Net 2.0 (see Figure 2). The importance of stacking multiple δt-blocks will be demonstrated by our numerical experiments in Sections 3 and 4.

The PDE-Net 2.0 can be easily described as: (1) stacking one δt-block multiple times; (2) sharing parameters among all δt-blocks. Given a set of observed data, training a PDE-Net 2.0 with $n$ δt-blocks amounts to minimizing the accumulated error $\sum_{i=1}^{n} \|u(t_i, \cdot) - \tilde u(t_i, \cdot)\|_2^2$, where $\tilde u(t_i, \cdot)$ is the output of the PDE-Net 2.0 (i.e. of $i$ stacked δt-blocks) with input $u(t_0, \cdot)$, and $u(t_i, \cdot)$ is the observed training data.
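The sketch below illustrates this stacking: the same one-block update (e.g. the `dt_block` sketched above) is applied repeatedly with shared parameters, and the training objective accumulates the error of every intermediate prediction. The function names and the mean-squared form of the per-step error are illustrative assumptions.

```python
def pde_net_forward(u0, step, n_blocks):
    """Unroll n_blocks delta-t blocks with shared parameters.
    `step` is the one-block update; the same callable, hence the same
    parameters, is used in every block."""
    preds, u = [], u0
    for _ in range(n_blocks):
        u = step(u)
        preds.append(u)
    return preds

def accumulated_error(preds, targets):
    """Accumulated data-fitting error over all intermediate predictions."""
    return sum(((p - t) ** 2).mean() for p, t in zip(preds, targets))
```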

Figure 2: The schematic diagram of the PDE-Net 2.0.

2.2 Convolutions and Differentiations

In the original PDE-Net [1], the learnable filters are properly constrained so that we can easily identify their correspondence to differential operators. PDE-Net 2.0 adopts the same constraints as the original version of the PDE-Net. For completeness, we shall review the related notions and concepts, and provide more details.

A profound relationship between convolutions and differentiations was presented in [32, 33], where the authors discussed the connection between the order of sum rules of filters and the orders of differential operators. Note that the definition of convolution we use follows the convention of deep learning, which is defined as

$$(q \circledast u)[i, j] = \sum_{k, l} q[k, l]\, u[i + k, j + l].$$

This is essentially correlation rather than convolution in the mathematical convention. Note that if the data $u$ is of finite size, we use periodic boundary conditions.
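The snippet below is a small, self-contained sketch of this deep-learning-style "convolution" (i.e. correlation) with periodic boundary conditions, written in plain NumPy for clarity; the function name is hypothetical.

```python
import numpy as np

def correlate2d_periodic(u, q):
    """Correlation (deep-learning 'convolution', no kernel flipping)
    of a 2D field u with an odd-sized filter q, with periodic boundaries."""
    k = q.shape[0] // 2                       # half-width of the filter
    out = np.zeros_like(u, dtype=float)
    for di in range(-k, k + 1):
        for dj in range(-k, k + 1):
            # u sampled at (i + di, j + dj), wrapped around periodically
            out += q[di + k, dj + k] * np.roll(u, shift=(-di, -dj), axis=(0, 1))
    return out
```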

The order of sum rules is closely related to the order of vanishing moments in wavelet theory [34, 35]. We first recall the definition of the order of sum rules.

Definition 2.1 (Order of Sum Rules)

For a filter $q$, we say $q$ has sum rules of order $\alpha = (\alpha_1, \alpha_2) \in \mathbb{Z}^2_+$, provided that

$$\sum_{k \in \mathbb{Z}^2} k^{\beta}\, q[k] = 0 \qquad (3)$$

for all $\beta \in \mathbb{Z}^2_+$ with $|\beta| < |\alpha|$ and for all $\beta \in \mathbb{Z}^2_+$ with $|\beta| = |\alpha|$ but $\beta \neq \alpha$. If (3) holds for all $\beta \in \mathbb{Z}^2_+$ with $|\beta| < K$ except for some $\bar\beta \in \mathbb{Z}^2_+$ with $|\bar\beta| = J < K$, then we say $q$ has total sum rules of order $K \backslash \{J + 1\}$.

In practical implementation, the filters are normally of finite size and can be understood as matrices. For an $N \times N$ filter $q$ ($N$ being an odd number), assuming the indices of $q$ start from $-\frac{N-1}{2}$, (3) can be written in the following simpler form

$$\sum_{i, j = -\frac{N-1}{2}}^{\frac{N-1}{2}} i^{\beta_1} j^{\beta_2}\, q[i, j] = 0.$$

The following proposition from [33] links the orders of sum rules with orders of differential operators.

Proposition 2.1

Let $q$ be a filter with sum rules of order $\alpha \in \mathbb{Z}^2_+$. Then for a smooth function $v(x)$ on $\mathbb{R}^2$, we have

$$\frac{1}{\varepsilon^{|\alpha|}} \sum_{k \in \mathbb{Z}^2} q[k]\, v(x + \varepsilon k) = C_{\alpha} \frac{\partial^{\alpha}}{\partial x^{\alpha}} v(x) + O(\varepsilon), \quad \text{as } \varepsilon \to 0. \qquad (4)$$

If, in addition, $q$ has total sum rules of order $K \backslash \{|\alpha| + 1\}$ for some $K > |\alpha|$, then

$$\frac{1}{\varepsilon^{|\alpha|}} \sum_{k \in \mathbb{Z}^2} q[k]\, v(x + \varepsilon k) = C_{\alpha} \frac{\partial^{\alpha}}{\partial x^{\alpha}} v(x) + O(\varepsilon^{K - |\alpha|}), \quad \text{as } \varepsilon \to 0. \qquad (5)$$

According to Proposition 2.1, an $\alpha$th order differential operator can be approximated by the convolution of a filter having sum rules of order $\alpha$. Furthermore, according to (5), one can obtain a high order approximation of a given differential operator if the corresponding filter has total sum rules of order $K \backslash \{|\alpha| + 1\}$ with $K > |\alpha| + 1$. For example, consider the filter

$$q = \frac{1}{4}\begin{pmatrix} 1 & 0 & -1 \\ 0 & 0 & 0 \\ -1 & 0 & 1 \end{pmatrix}.$$

It has sum rules of order $(1, 1)$ and total sum rules of order $4 \backslash \{3\}$. Thus, up to a constant and a proper scaling, $q$ corresponds to a discretization of $\frac{\partial^2}{\partial x \partial y}$ with second order accuracy.

For an $N \times N$ filter $q$, define the moment matrix of $q$ as

$$M(q) = \big(m_{k, l}\big)_{k, l = 0, \ldots, N-1}, \qquad (6)$$

where

$$m_{k, l} = \frac{1}{k!\, l!} \sum_{i, j = -\frac{N-1}{2}}^{\frac{N-1}{2}} i^{k} j^{l}\, q[i, j], \quad k, l = 0, 1, \ldots, N - 1. \qquad (7)$$

We shall call $m_{k, l}$ the $(k, l)$-moment of $q$ for simplicity. For any smooth function $u$, we apply convolution on the sampled version of $u$ (with grid spacing $\delta x$ and $\delta y$) with respect to the filter $q$. By Taylor's expansion, one can easily obtain the following formula

$$q \circledast u(x_0, y_0) = \sum_{k, l = 0}^{N - 1} m_{k, l}\, \delta x^{k} \delta y^{l}\, \frac{\partial^{k + l} u}{\partial x^{k} \partial y^{l}}(x_0, y_0) + o\big(|\delta x|^{N-1} + |\delta y|^{N-1}\big). \qquad (8)$$

From (8) we can see that a filter $q$ can be designed to approximate any differential operator with a prescribed order of accuracy by imposing constraints on $M(q)$.

For example, if we want to approximate $\frac{\partial u}{\partial x}$ (up to a constant) by the convolution $q \circledast u$, where $q$ is a $3 \times 3$ filter, we can consider the following constraints on $M(q)$:

$$M(q) = \begin{pmatrix} 0 & 0 & \star \\ 1 & \star & \star \\ \star & \star & \star \end{pmatrix} \quad \text{or} \quad M(q) = \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & \star \\ 0 & \star & \star \end{pmatrix}. \qquad (9)$$

Here, $\star$ means no constraint on the corresponding entry. The constraints described by the moment matrix on the left of (9) guarantee that the approximation accuracy is at least first order, and the one on the right guarantees an approximation of at least second order. In particular, when all entries of $M(q)$ are constrained, e.g.

$$M(q) = \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix},$$

the corresponding filter is uniquely determined. In PDE-Net 2.0, all filters are learned subject to partial constraints on their associated moment matrices, with at least second order accuracy.
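As a concrete check of (6)-(9), the following sketch computes the moment matrix of a filter under the convention used above (rows of $M$ index the $x$-moments $k$, columns the $y$-moments $l$; the filter indices run from $-(N-1)/2$ to $(N-1)/2$). The helper name is hypothetical and the index convention is an assumption made for illustration.

```python
import numpy as np
from scipy.special import factorial

def moment_matrix(q):
    """Moment matrix M(q) of an N x N filter q, cf. (6)-(7):
    M[k, l] = (1 / (k! l!)) * sum_{i,j} i^k j^l q[i, j]."""
    N = q.shape[0]
    idx = np.arange(N) - (N - 1) // 2                 # filter indices -(N-1)/2 .. (N-1)/2
    V = np.vander(idx, N, increasing=True).T          # V[k, i] = idx[i] ** k
    M = V @ q @ V.T                                   # sum_{i,j} i^k j^l q[i, j]
    K = np.arange(N)
    return M / (factorial(K)[:, None] * factorial(K)[None, :])

# Central-difference filter for d/dx (rows = x-direction), up to grid scaling:
q = np.zeros((3, 3))
q[0, 1], q[2, 1] = -0.5, 0.5
print(np.round(moment_matrix(q), 3))
# -> m_00 = 0, m_10 = 1 and all other entries 0, matching the fully
#    constrained moment matrix displayed above.
```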

2.3 Design of SymNet: a Symbolic Neural Network

The symbolic neural network $\mathrm{SymNet}^m_k$ of PDE-Net 2.0 is introduced to approximate the multivariate nonlinear response function $F$ of (1). Neural networks have recently been proven effective in approximating multivariate functions in various scenarios [36, 37, 38, 39, 40]. For the problem at hand, we not only require the network to have good expressive power, but also good transparency, so that the analytic form of $F$ can be readily inferred after training. Our design of SymNet is motivated by EQL/EQL÷ proposed in [41, 42].

The $\mathrm{SymNet}^m_k$, as illustrated in Figure 3, is a network that takes an $m$-dimensional vector as input and has $k$ hidden layers.

Figure 3: The schematic diagram of $\mathrm{SymNet}^m_2$.

Figure 3 shows the symbolic neural network with two hidden layers, i.e. $\mathrm{SymNet}^m_2$, where $f$ is a dyadic operation unit, e.g. multiplication or division. In this paper, we focus on multiplication, i.e. we take $f(a, b) = a \times b$. Different from EQL/EQL÷, each hidden layer of the SymNet directly takes the outputs from the preceding layer as inputs, rather than a linear combination of them. Furthermore, it adds one additional variable (i.e. the output of the dyadic unit $f$) at each hidden layer. The $\mathrm{SymNet}^m_k$ can represent all polynomials of $m$ variables with the total number of multiplications not exceeding $k$. If needed, one can add more operations to the SymNet to increase the capacity of the network.
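The following is a minimal sketch of such a network: each hidden layer forms two linear combinations of the current variables, multiplies them, and appends the product as one new variable, with a final linear read-out. The class name and the exact parameterization of the multiplication unit are assumptions for illustration and may differ in detail from the paper's implementation.

```python
import torch
import torch.nn as nn

class SymNet(nn.Module):
    """Sketch of SymNet^m_k with multiplication units."""

    def __init__(self, m, k):
        super().__init__()
        # layer i maps the current (m + i)-dimensional variable vector to
        # the two factors of its multiplication unit
        self.hidden = nn.ModuleList(nn.Linear(m + i, 2) for i in range(k))
        self.readout = nn.Linear(m + k, 1)      # final linear combination

    def forward(self, x):                        # x: (batch, m)
        for layer in self.hidden:
            a, b = layer(x).unbind(dim=-1)       # two linear combinations
            x = torch.cat([x, (a * b).unsqueeze(-1)], dim=-1)  # append product
        return self.readout(x)
```

When the SymNet is used inside a δt-block, it is applied pointwise on the grid: the stacked outputs $D_{ij}u$ at each grid point play the role of the $m$ input variables.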

Now, we show that SymNet is more compact than the dictionaries of SINDy. For that, we first introduce some notions.

Definition 2.2

Define the set of all polynomials of $m$ variables with the total number of multiplications not exceeding $k$ as $P^m_k$. Here, the total number of multiplications of a polynomial is counted as follows:

  • For any monomial of degree $d$ with $d \ge 2$, the number of multiplications of the monomial is counted as $d - 1$. When $d = 0$ or $d = 1$, the count is 0.

  • For any polynomial, the total number of multiplications is counted as the sum of the numbers of multiplications of its monomials.

For example, $x_1 x_2$ and $x_1 x_2 + x_3 x_4$ (with $m \ge 4$) are both members of $P^m_2$. The elements of $P^m_k$ are of simple form when $k$ is relatively small. The following proposition shows that $\mathrm{SymNet}^m_k$ can represent all polynomials of $m$ variables with the total number of multiplications not exceeding $k$. Note that the actual capacity of $\mathrm{SymNet}^m_k$ is larger than $P^m_k$, i.e. $P^m_k$ is a subset of the set of functions that $\mathrm{SymNet}^m_k$ can represent.

Proposition 2.2

For any $p \in P^m_k$, there exists a set of parameters for $\mathrm{SymNet}^m_k$ such that $\mathrm{SymNet}^m_k(x_1, \ldots, x_m) = p(x_1, \ldots, x_m)$.

Proof: We prove this proposition by induction on $k$. When $k = 1$, the conclusion obviously holds. Suppose the conclusion holds for $k - 1$. For any polynomial $p \in P^m_k$, we only need to consider the case when $p$ has a total number of multiplications greater than 1.

We take any monomial of $p$ that has degree greater than 1, which we may suppose takes the form $x_i x_j\, g$, where $g$ is a monomial of the variables $x_1, \ldots, x_m$. Define the new variable $x_{m+1} = x_i x_j$. Replacing the factor $x_i x_j$ by $x_{m+1}$, the polynomial $p$ can be viewed as a polynomial $\tilde p$ of the $m + 1$ variables $x_1, \ldots, x_m, x_{m+1}$ with a total number of multiplications not exceeding $k - 1$, i.e. $\tilde p \in P^{m+1}_{k-1}$.

By the induction hypothesis, there exists a set of parameters such that $\mathrm{SymNet}^{m+1}_{k-1}(x_1, \ldots, x_{m+1}) = \tilde p(x_1, \ldots, x_{m+1})$.

We take the linear transform between the input layer and the first hidden layer of $\mathrm{SymNet}^m_k$ such that the multiplication unit computes $x_{m+1} = x_i x_j$. Then, the output of the first hidden layer is $(x_1, \ldots, x_m, x_{m+1})$. If we use it as the input of $\mathrm{SymNet}^{m+1}_{k-1}$, we have

$$\mathrm{SymNet}^m_k(x_1, \ldots, x_m) = \mathrm{SymNet}^{m+1}_{k-1}(x_1, \ldots, x_m, x_i x_j) = \tilde p(x_1, \ldots, x_m, x_i x_j) = p(x_1, \ldots, x_m),$$

which concludes the proof.

SINDy constructs a dictionary that includes all possible monomials up to a certain degree. Observe that there are in total $\binom{m+d}{d}$ monomials of $m$ variables with degree not exceeding $d$. Our symbolic network, however, is more compact than SINDy. The following proposition compares the complexity of SymNet and SINDy; its proof is straightforward.

Proposition 2.3

Let $p \in P^m_k$ and suppose $p$ has monomials of degree at most $d$.

  • The memory load of the $\mathrm{SymNet}^m_k$ that approximates $p$, and the number of flops needed to evaluate it, grow only linearly in $m$ for a fixed depth $k$.

  • Constructing a dictionary with all possible monomials of degree not exceeding $d$ requires a memory load of order $\binom{m+d}{d}$, and the evaluation of a linear combination of the dictionary members requires a comparable number of flops.

We use the following example to show the advantage of SymNet over SINDy. Consider two variables $u$, $v$ and all of their derivatives up to a given order as the inputs. Suppose the polynomial to be approximated contains only a few multiplications of these inputs. Then the dictionary of SINDy must still enumerate every candidate monomial, and the computation of a linear combination of all of its elements requires a correspondingly large number of flops, whereas the memory load of the SymNet in this example is 15 and an evaluation of the network requires 180 flops. Therefore, SymNet can significantly reduce memory load and computation cost when the number of input variables is large. Note that when the required depth $k$ is large and the polynomial degree is small, SymNet can be worse than SINDy. However, for system identification problems we normally wish to obtain a compact representation (i.e. a small $k$). Thus, SymNet takes full advantage of this prior knowledge and can significantly save on memory and computation cost, which is crucial in the training of the PDE-Net 2.0.

2.4 Loss Function and Regularization

We adopt the following loss function to train the proposed PDE-Net 2.0:

$$L = L_{\mathrm{data}} + \lambda_1 L_{\mathrm{moment}} + \lambda_2 L_{\mathrm{SymNet}},$$

where $\lambda_1$ and $\lambda_2$ are fixed hyper-parameters. Now, we present details on each term of the loss function and introduce pseudo-upwind as an additional constraint on PDE-Net 2.0.

2.4.1 Data Approximation

Consider the data set $\{u_j(t_i, \cdot) : 0 \le i \le n,\ 1 \le j \le N\}$, where $n$ is the number of δt-blocks and $N$ is the total number of sampled solution paths. The index $j$ indicates the $j$-th solution path with a certain initial condition of the unknown dynamics. Note that one can split a long solution path into multiple shorter ones. We would like to train the PDE-Net 2.0 with $n$ δt-blocks. For a given $n$, every pair of the data $\big(u_j(t_0, \cdot), u_j(t_i, \cdot)\big)$, for each $1 \le i \le n$ and $1 \le j \le N$, is a training sample, where $u_j(t_0, \cdot)$ is the input and $u_j(t_i, \cdot)$ is the label that we need to match with the output from the network. For that, we define the data approximation term $L_{\mathrm{data}}$ as

$$L_{\mathrm{data}} = \frac{1}{nN} \sum_{i=1}^{n} \sum_{j=1}^{N} \big\| u_j(t_i, \cdot) - \tilde u_j(t_i, \cdot) \big\|_2^2,$$

where $\tilde u_j(t_i, \cdot)$ is the output of the PDE-Net 2.0 (i.e. of $i$ stacked δt-blocks) with $u_j(t_0, \cdot)$ as the input.

2.4.2 Regularization: $L_{\mathrm{moment}}$ and $L_{\mathrm{SymNet}}$

For a given threshold $\delta > 0$, define Huber's loss function as

$$\ell_{\delta}(x) = \begin{cases} \tfrac{1}{2} x^2, & |x| \le \delta, \\ \delta\big(|x| - \tfrac{1}{2}\delta\big), & |x| > \delta. \end{cases}$$

Then, we define $L_{\mathrm{moment}}$ as the sum of Huber's loss applied to the unconstrained entries of the moment matrices of the learnable filters,

$$L_{\mathrm{moment}} = \sum_{i, j} \sum_{(k, l)\ \mathrm{free}} \ell_{\delta}\big(M(q_{ij})_{k, l}\big),$$

where the $q_{ij}$ are the filters of PDE-Net 2.0 and $M(q_{ij})$ is the moment matrix of $q_{ij}$. We use this loss function to regularize the learnable filters to reduce overfitting. The threshold $\delta$ is kept fixed in our numerical experiments.

Given $\ell_{\delta}$ as defined above, we use it to enforce sparsity on the parameters of the SymNet. This helps to reduce overfitting and enables more stable prediction. The loss function $L_{\mathrm{SymNet}}$ is defined as

$$L_{\mathrm{SymNet}} = \sum_{w \in \Theta_{\mathrm{SymNet}}} \ell_{\delta}(w),$$

where $\Theta_{\mathrm{SymNet}}$ denotes the collection of all weights of the SymNet. The associated threshold is likewise kept fixed in our numerical experiments.
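The sketch below assembles the three terms of the loss in PyTorch. The hyper-parameter values, the mean-squared form of the data term and the helper names are placeholder assumptions; only the overall structure (data term plus Huber-type penalties on the free moment entries and on the SymNet weights) follows the description above.

```python
import torch

def huber(x, delta):
    """Huber's loss, applied element-wise."""
    absx = x.abs()
    return torch.where(absx <= delta, 0.5 * x ** 2, delta * (absx - 0.5 * delta))

def total_loss(preds, targets, moment_mats, free_masks, symnet_params,
               lam1=1e-3, lam2=1e-3, delta=1.0):
    """L = L_data + lam1 * L_moment + lam2 * L_SymNet (placeholder weights).

    moment_mats   : moment matrices M(q_ij) of the learnable filters
    free_masks    : boolean masks marking the unconstrained moment entries
    symnet_params : iterable over the SymNet weight tensors
    """
    l_data = sum(((p - t) ** 2).mean() for p, t in zip(preds, targets))
    l_moment = sum(huber(M[mask], delta).sum()
                   for M, mask in zip(moment_mats, free_masks))
    l_symnet = sum(huber(w, delta).sum() for w in symnet_params)
    return l_data + lam1 * l_moment + lam2 * l_symnet
```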

2.4.3 Pseudo-upwind

In numerical PDEs, to ensure stability of a numerical scheme, we need to design conservative schemes or use upwind schemes [43, 44, 45]. This is also important for PDE-Net 2.0 during inference. However, the challenge we face is that we do not know a priori the form or the type of the PDE. Therefore, we introduce a method called pseudo-upwind to help maintain the stability of the PDE-Net 2.0.

Given a 2D filter $q$, define the flipping operators $\mathrm{flip}_x$ and $\mathrm{flip}_y$ as

$$\big(\mathrm{flip}_x(q)\big)[k_1, k_2] = q[-k_1, k_2]$$

and

$$\big(\mathrm{flip}_y(q)\big)[k_1, k_2] = q[k_1, -k_2].$$

In each δt-block of the PDE-Net 2.0, before we apply the convolution with a filter, we first determine whether we should use the filter as it is or flip it first. We use the following univariate PDE as an example to demonstrate the idea: suppose the input of a δt-block is $u$. The pseudo-upwind procedure is described in Algorithm 1.

Algorithm 1: pseudo-upwind in a δt-block. Given the input $u$, the algorithm checks a sign condition for each first-order derivative term and, depending on the outcome, applies either the corresponding filter or its flipped version before returning the assembled right-hand side of the δt-block.
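As a rough illustration of the filter-flipping idea (not the exact Algorithm 1, whose sign conditions involve quantities computed inside the δt-block), the sketch below chooses between a one-sided stencil and its x-flipped counterpart based on the sign of a local advection coefficient; all names are hypothetical.

```python
import numpy as np
from scipy.ndimage import correlate

def flip_x(q):
    """Flip a 2D filter along the x (row) direction."""
    return q[::-1, :].copy()

def pseudo_upwind_dx(u, a, q_backward):
    """Upwind-style estimate of u_x for an advection term a * u_x.

    u          : 2D field on a periodic grid
    a          : local advection coefficient (same shape as u); only its sign is used
    q_backward : one-sided first-derivative stencil biased in the negative x-direction
    """
    ux_backward = correlate(u, q_backward, mode="wrap")
    # flipping a first-derivative stencil along x reverses its sign,
    # so negate to recover the forward-biased estimate of u_x
    ux_forward = -correlate(u, flip_x(q_backward), mode="wrap")
    return np.where(a >= 0, ux_backward, ux_forward)
```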
Remark 2.1

Note that the algorithm does not exactly enforce upwind in general. This is why we call it pseudo-upwind. We further note that:

  • Given a PDE of the form $u_t = f(u, u_x, u_y, \ldots)$, we can use the signs of $\partial f / \partial u_x$ and $\partial f / \partial u_y$ to determine whether we should flip a filter or not.

  • For a vector PDE, we can use, for instance, the signs of the corresponding advection coefficients to determine how the relevant first-order derivative terms should be approximated in the δt-block [44].

2.5 Initialization and training

In the PDE-Net 2.0, the parameters can be divided into three groups: 1) the moment matrices that generate the convolution kernels; 2) the parameters of the SymNets; and 3) the hyper-parameters, such as the number of filters, the size of the filters, the number of δt-blocks, the number of hidden layers of the SymNet, the regularization weights $\lambda_1, \lambda_2$, etc. The parameters of the SymNets are shared across the computation domain, and are initialized by random sampling from a Gaussian distribution. For the filters, we initialize those associated with the first-order derivatives using a second order upwind scheme, and we use central differences for all other filters.

We use layer-wise training to train the PDE-Net 2.0. We start by training the PDE-Net 2.0 on the first δt-block with a batch of data, and then use the result of the first δt-block as the initialization and restart training on the first two δt-blocks with another batch. We repeat this procedure until all δt-blocks are included. Note that the parameters are shared across all δt-blocks. In addition, we add a warm-up step before the training of the first δt-block by fixing the filters and setting the regularization terms to zero (i.e. $\lambda_1 = \lambda_2 = 0$). The warm-up step is used to obtain a good initial guess of the parameters of the SymNets.
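A compact sketch of this layer-wise schedule is given below; it uses `torch.optim.LBFGS` as a stand-in for the BFGS optimizer mentioned in Section 3, omits the warm-up step and the regularization terms for brevity, and assumes hypothetical helpers `model(u0, n_blocks)` (returning the intermediate predictions) and `make_batch(n_blocks)` (returning a fresh batch of data).

```python
import torch

def train_layerwise(model, make_batch, n_blocks_max):
    """Train on 1 delta-t block, then restart on 2 blocks, ..., up to
    n_blocks_max, re-using the parameters learned so far at every stage."""
    for n_blocks in range(1, n_blocks_max + 1):
        u0, targets = make_batch(n_blocks)      # new batch of data per stage
        opt = torch.optim.LBFGS(
            [p for p in model.parameters() if p.requires_grad], max_iter=500)

        def closure():
            opt.zero_grad()
            preds = model(u0, n_blocks)         # parameters shared by all blocks
            loss = sum(((p - t) ** 2).mean() for p, t in zip(preds, targets))
            loss.backward()
            return loss

        opt.step(closure)
```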

To demonstrate the necessity of having learnable filters, we will compare the PDE-Net 2.0 containing learnable filters with the PDE-Net 2.0 having fixed filters. To differentiate the two cases, we shall call the PDE-Net 2.0 with fixed filters the “Frozen-PDE-Net 2.0”.

3 Numerical Studies: Burgers Equation

Burgers’ equation is a fundamental partial differential equation in many areas such as fluid mechanics and traffic flow modeling. It has a lot in common with the Navier-Stokes equation, e.g. the same type of advective nonlinearity and the presence of viscosity.

3.1 Simulated data, training and testing

In this section we consider a 2-dimensional Burgers' equation with periodic boundary conditions,

$$\frac{\partial \mathbf{u}}{\partial t} = -(\mathbf{u} \cdot \nabla)\mathbf{u} + \nu \Delta \mathbf{u}, \qquad \mathbf{u} = (u, v)^{\top}, \qquad (10)$$

with viscosity $\nu > 0$. The training data is generated by a finite difference scheme on a fine mesh and then restricted to a coarser mesh. The temporal discretization is 2nd order Runge-Kutta, and the spatial discretization uses a 2nd order upwind scheme for the advection terms and the central difference scheme for the diffusion terms. The initial value takes the following form,

(11)

where the random coefficients in (11) are drawn from the standard normal distribution and from a uniform distribution, respectively. We also add noise to the generated data:

(12)

where the additive noise is drawn from a standard normal distribution and scaled by a prescribed noise level.
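For readers who want to reproduce a qualitatively similar data set, the sketch below generates random smooth periodic initial values in the spirit of (11) and adds noise in the spirit of (12). The domain, mesh size, number of Fourier modes, coefficient scales and noise level are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def random_initial_condition(n=64, n_modes=4, rng=None):
    """Random smooth periodic field built from low-frequency Fourier modes."""
    rng = rng or np.random.default_rng()
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")
    w = np.zeros((n, n))
    for k in range(-n_modes, n_modes + 1):
        for l in range(-n_modes, n_modes + 1):
            lam, gam = rng.normal(size=2) / (2 * n_modes + 1)
            w += lam * np.cos(k * X + l * Y) + gam * np.sin(k * X + l * Y)
    return w + rng.uniform(-1, 1)     # random constant shift

def add_noise(u, level=0.01, rng=None):
    """Additive Gaussian noise scaled by the field's standard deviation."""
    rng = rng or np.random.default_rng()
    return u + level * u.std() * rng.standard_normal(u.shape)
```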

Suppose we know a priori that the order of the underlying PDE is no more than 2. Then we can use two SymNets to approximate the right-hand-side nonlinear response function of (10) component-wise, with $u$, $v$ and their (approximate) derivatives up to order 2 as inputs. We denote the two SymNets by $\mathrm{SymNet}_u$ and $\mathrm{SymNet}_v$, respectively. Then, each δt-block of the PDE-Net 2.0 advances $(\tilde u, \tilde v)$ by one forward Euler step of $\mathrm{SymNet}_u$ and $\mathrm{SymNet}_v$, whose inputs are computed by the convolution operators $D_{ij}$.

During training and testing, the data is generated on-the-fly. The total number of parameters in $\mathrm{SymNet}_u$ and $\mathrm{SymNet}_v$ is 200, and the number of trainable parameters in the moment matrices is 105 (the remaining entries of the moment matrices are fixed by the constraints). During training, we use BFGS, instead of SGD, to optimize the parameters. We use 28 data samples per batch to train each δt-block and we only construct the PDE-Net 2.0 up to 15 layers, which requires in total 420 data samples during the whole training procedure.

3.2 Results and discussions

We first demonstrate the ability of the trained PDE-Net 2.0 to recover the analytic form of the unknown PDE model. We use the symbolic math tool in Python to obtain the analytic form of the learned response function. Results are summarized in Table 1. As one can see from Table 1, we can recover the terms of the Burgers' equation with good accuracy, and using learnable filters helps with the identification of the PDE model. Furthermore, the terms that are not included in the Burgers' equation all have relatively small weights in the SymNet (see Figure 7).

Table 1: PDE model identification. The rows compare the correct PDE with the equations recovered by Frozen-PDE-Net 2.0 and by PDE-Net 2.0.

We also demonstrate the ability of the trained PDE-Net 2.0 in prediction, i.e. its ability to generalize. After the PDE-Net 2.0 with multiple δt-blocks is trained, we randomly generate 1000 initial values based on (11) and (12), feed them to the PDE-Net 2.0, and measure the relative error between the predicted dynamics (i.e. the output of the PDE-Net 2.0) and the actual dynamics (obtained by solving (10) using a high precision numerical scheme). The relative error between the true data $u$ and the predicted data $\tilde u$ is defined as

$$\epsilon = \frac{\|\tilde u - u\|_2}{\|u - \bar u\|_2},$$

where $\bar u$ is the spatial average of $u$. The error plots are shown in Figure 4. Some of the images of the predicted dynamics are presented in Figure 5 and the error maps are presented in Figure 6. As we can see, even when trained with noisy data, the PDE-Net 2.0 is able to perform long-term prediction. Having multiple δt-blocks indeed improves the prediction accuracy. Furthermore, PDE-Net 2.0 performs significantly better than Frozen-PDE-Net 2.0.
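The relative error defined above can be computed with a one-liner; the sketch below assumes NumPy arrays holding the predicted and true fields at a fixed time.

```python
import numpy as np

def relative_error(u_pred, u_true):
    """l2 error of the prediction, normalized by the l2 norm of the
    true field with its spatial average removed."""
    return np.linalg.norm(u_pred - u_true) / np.linalg.norm(u_true - u_true.mean())
```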

Figure 4: Prediction errors of the PDE-Net 2.0 (orange) and Frozen-PDE-Net 2.0 (blue). In each plot, the horizontal axis indicates the prediction time and the vertical axis shows the relative errors. The banded curves indicate the 25%-100% percentile of the relative errors among 1000 test samples. The darker regions indicate the 25%-75% percentile of the relative errors, which shows that PDE-Net 2.0 can predict very well in most cases.

Dynamics of $u$

Dynamics of $v$

Figure 5: The first row shows the images of the true dynamics. The last two rows show the images of the predicted dynamics using the Frozen-PDE-Net 2.0 (second row) and the PDE-Net 2.0 with 15 δt-blocks (third row).

Error maps of $u$

Error maps of $v$

Figure 6: The error maps of the Frozen-PDE-Net 2.0 (first row) and of the PDE-Net 2.0 (second row), both with 15 δt-blocks.

3.3 Importance of $L_{\mathrm{SymNet}}$

This subsection demonstrates the importance of enforcing sparsity on the SymNet. As we can see from Figure 7, having sparsity constraints on the SymNet helps with suppressing the weights of the terms that do not exist in the Burgers' equation. Furthermore, Figure 8 shows that having the sparsity constraint on the SymNet significantly reduces prediction errors.

Figure 7: The coefficients of the remainder terms for the $u$-component (left) and the $v$-component (right) with the sparsity constraint on the SymNet (orange) and without the sparsity constraint (blue). The bands for both cases are computed based on 20 independent training runs.
Figure 8: Prediction errors of the PDE-Net 2.0 with (orange) and without (blue) the sparsity constraint on the SymNet. In each plot, the horizontal axis indicates the prediction time and the vertical axis shows the relative errors. The banded curves indicate the 25%-100% percentile of the relative errors among 1000 test samples.

4 Numerical Studies: Diffusion Equation

Diffusion phenomena have been studied in many applications in physics, e.g. the collective motion of micro-particles in materials due to the random movement of each particle, or modeling the distribution of temperature in a given region over time.

4.1 Simulated data, training and testing

Consider the 2-dimensional heat equation with periodic boundary conditions,

$$\frac{\partial u}{\partial t} = c\, \Delta u, \qquad (13)$$

where $c > 0$ is the diffusion coefficient. The training data of the heat equation is generated by 2nd order Runge-Kutta in time and a central difference scheme in space on a fine mesh. We then restrict the data to a coarser mesh. The initial value is also generated from (11).

4.2 Results and discussions

The demonstration of the ability of the trained PDE-Net 2.0 to identify the PDE model is given in Table 2. As one can see from Table 2, we can recover the terms of the heat equation with good accuracy. Furthermore, all the terms that are not included in the heat equation have much smaller weights in the SymNet.

We also demonstrate the ability of the trained PDE-Net 2.0 in prediction. The testing method is exactly the same as the method described in Section 3. Comparisons between PDE-Net 2.0 and Frozen-PDE-Net 2.0 are shown in Figure 9, where we can clearly see the advantage of learning the filters. Visualization of the predicted dynamics is given in Figure 10. All these results show that the learned PDE-Net 2.0 performs well in prediction.

Table 2: PDE model identification. The rows compare the correct PDE with the equations recovered by Frozen-PDE-Net 2.0 and by PDE-Net 2.0. Note that the largest term among the remainders (i.e. the terms that are not included in the table) is 8e-4 for Frozen-PDE-Net 2.0 and 8e-5 for PDE-Net 2.0.
Figure 9: Prediction errors of the PDE-Net 2.0 (orange) and Frozen-PDE-Net 2.0 (blue). In each plot, the horizontal axis indicates the prediction time and the vertical axis shows the relative errors. The banded curves indicate the 25%-100% percentile of the relative errors among 1000 test samples.
Figure 10: The first row shows the images of the true dynamics. The second row shows the predicted dynamics using the Frozen-PDE-Net 2.0 with 15 δt-blocks. The third row shows the predicted dynamics using the PDE-Net 2.0 with 15 δt-blocks.

5 Conclusions and Future Work

In this paper, we proposed a numeric-symbolic hybrid deep network, called PDE-Net 2.0, for PDE model recovery from observed dynamic data. PDE-Net 2.0 is able to recover the analytic form of the PDE model with minor assumptions on the mechanisms of the observed dynamics. For example, it is able to recover the analytic form of Burgers' equation with good confidence without any prior knowledge of the type of the equation. Therefore, PDE-Net 2.0 has the potential to uncover new PDEs from observed data. Furthermore, after training, the network can perform accurate long-term prediction without re-training for new initial conditions. As part of our future work, we would like to apply the network to real biological dynamic data. We will further investigate the reliability of the network and explore the possibility of uncovering new dynamical principles that have meaningful scientific explanations.

Acknowledgments

Zichao Long is supported by The Elite Program of Computational and Applied Mathematics for PhD Candidates of Peking University. Yiping Lu is supported by the Elite Undergraduate Training Program of the School of Mathematical Sciences at Peking University. Bin Dong is supported in part by NSFC 11671022.

References