VarNet: Variational Neural Networks for the Solution of Partial Differential Equations

December 16, 2019 — Reza Khodayi-mehr, et al., Duke University

In this paper we propose a new model-based unsupervised learning method, called VarNet, for the solution of partial differential equations (PDEs) using deep neural networks (NNs). Particularly, we propose a novel loss function that relies on the variational (integral) form of PDEs as opposed to their differential form, which is commonly used in the literature. Our loss function is discretization-free, highly parallelizable, and more effective in capturing the solution of PDEs since it employs lower-order derivatives and trains over regions of space-time with non-zero measure. Given this loss function, we also propose an approach to optimally select the space-time samples used to train the NN, based on the feedback provided from the PDE residual. The models obtained using VarNet are smooth and do not require interpolation. They are also easily differentiable and can directly be used for control and optimization of PDEs. Finally, VarNet can straightforwardly incorporate parametric PDE models, making it a natural tool for model order reduction (MOR) of PDEs. We demonstrate the performance of our method through extensive numerical experiments for the advection-diffusion PDE as an important case-study.


I Introduction

Dynamical systems are typically modeled using differential equations that capture their input-output behavior. Particularly, for spatiotemporal systems the underlying dynamics are modeled using partial differential equations (PDEs) whose states live in infinite-dimensional spaces. Except for some special cases, only approximate solutions to such systems can be obtained using discretization-based numerical methods. These methods typically belong to one of three main categories, namely, finite difference (FD) methods, finite element (FE) methods, and finite volume (FV) methods [1]. The FD methods are often restricted to simple domains whereas the FE and FV methods can be applied to more general problems. These methods return solutions defined over a set of grid points that sample the spatiotemporal domain; the solution at any other points can be obtained by interpolating the nearby nodal values. This local treatment of the solution makes the models obtained from these methods extremely expressive but also very expensive to evaluate and store.

In this paper we follow a different approach that relies on deep neural networks (NNs) to capture the solutions of PDEs using trainable parameters that are considerably fewer than the number of grid points used in discretization-based methods. The proposed framework, called VarNet for variational neural networks, employs a novel loss function that relies on the variational (integral) form of PDEs and is discretization-free and highly parallelizable. Since, compared to the differential form of the PDE, the variational form contains lower-order derivatives of the solution and considers segments of space-time as opposed to single points, the proposed loss function makes it possible to learn solutions of PDEs using significantly fewer training points. Moreover, this loss function allows the NN to learn solutions of PDEs in a model-based and unsupervised way, where training of the NN parameters is guided by the PDE itself without a need for prior labeling. We also propose a way to optimally select the points needed to train the NN, based on the feedback provided from the residual of the PDE. Using NNs to approximate solutions of PDEs makes it trivial to solve these PDEs parametrically; the parameters need only be included as additional inputs to the NN. Consequently, a great strength of the VarNet algorithm is that it can also be used as a powerful model order reduction (MOR) tool for PDEs, since the resulting models are very fast to evaluate.

I-A Relevant Literature

Using NNs to approximate solutions of PDEs can be beneficial for the following reasons: (i) their evaluation is extremely fast and thus, unlike currently available MOR methods, there is no need to compromise accuracy for speed, (ii) parallelization of the training is trivial, and (iii) the resulting model is smooth and differentiable and thus can be readily used in PDE-constrained optimization problems, e.g., for source identification [2] or control of PDEs [3]. The authors in [4, Ch. 4] provide a review of different approaches for solving PDEs using NNs. One group of approaches utilizes NNs to memorize the solution of PDEs. Particularly, they solve the PDE using a numerical method to obtain labeled training data and often utilize convolutional NNs (CNNs), which are powerful image processing tools, to capture the numerical solution in a supervised learning way [5]. For instance, [6] proposes a fluid dynamics solver that utilizes a CNN, consisting of encoding and decoding layers, to predict the velocity field in steady-state problems; it reports a speedup of four orders of magnitude in computation time. The authors in [7] propose PDE-NET, which is capable of identifying partial differential operators as well as approximating the solution of PDEs. Note that these approaches do not replace numerical methods but rather rely on them and introduce an extra layer of approximation.

Another class of works directly discretizes the domain and constructs a model based on this discretization. For instance, [8] utilizes radial basis functions to construct a linear system of equations that is solved to obtain an approximate solution of the desired PDE. There also exists another group of methods, called FE-NNs, which represent the governing FE equations at the element level using artificial neurons [9, 10, 11]. Note that both of these approaches scale with the number of discretization points and are similar in spirit to numerical methods like the FE method.

Most relevant to the approach proposed in this paper is a set of works that also directly train a NN to approximate the solution of PDEs in an unsupervised learning way. One of the early works of this kind is [12], which uses the residual of the PDE to define the required loss function. In order to remove the constraints from the training problem, the authors only consider simple domains for which the boundary conditions (BCs) can be manually enforced by a change of variables. The work in [13] elaborates more on this technique. Although these approaches attain accuracy comparable to numerical methods, they are impractical since in general enforcing the BCs is as difficult as solving the original PDE. Following a different approach, the work in [14] utilizes a constrained back-propagation algorithm to enforce the initial and boundary conditions during training. In order to avoid solving a constrained training problem, the authors in [15] add the constraints corresponding to the BCs to the objective as penalty terms. Similarly, in [16] the authors focus on the solution of PDEs with high dimensions using a long short-term memory architecture and prove a convergence result for the solution as the number of trainable parameters of the NN is increased. Although a large number of samples is required to train the model, this training data is provided to the NN sequentially. A similar approach is proposed in [17], which utilizes the physical laws modeled by PDEs as regularizing terms to guide the learning process, while [18] uses the PDEs to define energy fields that are minimized to train CNNs to predict the PDE solutions at discrete sets of points. Alternatively, reinforcement learning can be used to train NNs that approximate the solutions of PDEs. For instance, in [19] the actor-critic algorithm is used where the PDE residual acts as the critic.

The literature discussed above uses the PDE residual as the loss function, which contains high-order derivatives and trains over measure-zero points in space-time. Consequently, adequate learning of the PDE solution requires an extremely large number of training points. Moreover, these training points are selected arbitrarily and not based on their information content. Optimal selection of training points has been extensively studied in the context of active learning to alleviate the cost of labeling data in supervised learning. The work in [20] presents an extensive survey of this literature, including various strategies to determine the most informative queries to be made to an oracle. Often samples with the highest ambiguity are preferred; this is closely related to informative planning methods in the robotics literature [21, 22] and optimal experiment design [23]. Here we propose an active learning method for unsupervised learning that relies on the feedback obtained from the PDE residual to optimally select the training points themselves instead of querying labels for them. This is closely related to mesh adaptivity methods in the FE literature, where the underlying mesh is locally refined in regions with large error values [24].

I-B Contributions

The contributions of this paper can be summarized as follows: (i) Compared to methods that use NNs to approximate the solutions of PDEs, we propose a novel loss function that relies on the variational form of the PDE. The advantages of this loss function, compared to existing approaches that use the PDE residual, are two-fold. First, it contains lower-order derivatives so that the solution of the PDE can be estimated more accurately; note that it becomes progressively more difficult to estimate a function from its higher-order derivatives since differential operators are agnostic to translation. Second, it utilizes the variational (integral) form of the PDE, which considers segments of space-time as opposed to single points and imposes fewer smoothness requirements on the solution. Note that variational formulations have been successfully used in the FE method for a long time [25]. (ii) We propose a new way to optimally select the points needed to train the NNs that is informed by the feedback obtained from the PDE residual. Our optimal sampling method considerably improves the efficiency of the training and the accuracy of the resulting models. (iii) We develop the VarNet library [26] that uses the proposed deep learning framework to solve PDEs. We present thorough numerical experiments using this library to demonstrate our proposed approach. Particularly, we focus on the advection-diffusion (AD) equation, although the presented approach applies to any PDE that can be solved using the FE method. Note that discretization-based solutions to the AD-PDE often have stability issues for highly advective problems [27], which are magnified when the model is reduced; this is because the reduction typically amounts to losing high-frequency content of the original model. Through simulations we demonstrate that, unlike traditional MOR methods, our approach does not suffer from such instability issues.

The proposed loss function was first introduced and used to solve a robotic PDE-constrained optimal control problem in our short paper [3]. Compared to [3], here we introduce the VarNet library that employs the same loss function but is additionally equipped with the proposed adaptive method to optimally select the NN training points and can be used not only to solve PDEs but also for MOR. Finally, compared to [3], this paper presents extensive numerical experiments to validate the VarNet algorithm.

The remainder of this paper is organized as follows. In Section II we discuss the AD-PDE and formulate the desired training problem. Section III is devoted to the solution of PDEs using NNs where we discuss the loss function, the optimal selection of the training points, and steps to solve an AD-PDE using the VarNet library. We present numerical experiments in Section IV and finally, Section V concludes the paper.

II Problem Definition

II-A Advection-Diffusion PDE

Let $\Omega \subset \mathbb{R}^d$ denote a domain of interest, where $d$ is its dimension, let $\mathbf{x} \in \Omega$ denote a location in this domain, and let $t \in [0, T]$ denote the time variable. Furthermore, consider a velocity vector field $\mathbf{u}(t, \mathbf{x})$ and its corresponding diffusivity field $\kappa(t, \mathbf{x})$ (see [28] for more details on the relation between the velocity and diffusivity fields). Then, the transport of a quantity of interest $c(t, \mathbf{x})$, e.g., a chemical concentration, in this domain is described by the advection-diffusion (AD) PDE [29]

$\dot{c} + \nabla \cdot (c\,\mathbf{u}) - \nabla \cdot (\kappa\,\nabla c) = s$,   (1)

where $\dot{c}$ denotes the time derivative of the concentration and $s(t, \mathbf{x})$ is the time-dependent source field.

Given an appropriate initial condition (IC) and a set of boundary conditions (BCs), it can be shown that the AD-PDE (1) is well-posed and has a unique solution [30]. In this paper, we use the following IC and BCs (extending the results that follow to include more general BCs is straightforward and is skipped here for simplicity of presentation):

$c(0, \mathbf{x}) = c_0(\mathbf{x})$ for $\mathbf{x} \in \Omega$,   (2a)
$c(t, \mathbf{x}) = g_i(t, \mathbf{x})$ for $\mathbf{x} \in \Gamma_i$,   (2b)

where $\Gamma_i$ for $1 \le i \le n_b$ denote the boundaries of $\Omega$, and $c_0$ and $g_i$ possess appropriate regularity conditions [30]. Equation (2a) describes the initial state of the field at time $t = 0$, whereas equation (2b) prescribes the field value along the boundary $\Gamma_i$ as a time-varying function.

We refer to the parameters that appear in the AD-PDE (1), i.e., the velocity, diffusivity, and source fields as well as the IC and BCs, as the input data. Any of these fields can depend on secondary parameters, which consequently means that the solution of the AD-PDE will depend on those secondary parameters. For instance, in source identification problems, a parametric solution of the AD-PDE is sought as the properties of the source term vary; this parametric solution is then used to solve a PDE-constrained optimization problem [2]. In general, secondary parameters can also appear in MOR of PDEs. In the following, we denote by $\mathbf{p} \in \mathcal{P}$ the set of secondary parameters that the PDE input-data depend on, where $\mathcal{P}$ is the set of feasible values.

[Alternative PDEs] In this paper we focus on the AD-PDE as an important case-study that models a wide range of physical phenomena from mass transport to population dynamics [31]. Nevertheless, the method developed in this paper applies to any other PDE that can be solved using the FE method.

II-B Training Problem

Discretization-based numerical methods, e.g., the FE method, can be used to approximate the solution of the AD-PDE (1) given an appropriate set of boundary-initial conditions (BICs) (2). As discussed earlier, these methods, although very expressive, are computationally demanding to store and evaluate. Furthermore, it is not straightforward to obtain parametric solutions of PDEs using these methods. As a result, a wide range of MOR methods have been proposed to obtain such parametric solutions. These MOR methods typically rely on numerical methods to obtain the desired reduced models. Utilizing neural networks (NNs) to solve PDEs allows us to directly obtain differentiable parametric solutions of PDEs that are computationally more efficient to evaluate and require considerably less memory to store.

Let $\theta \in \mathbb{R}^n$ denote the weights and biases of the NN, a total of $n$ trainable parameters. Then the solution of the AD-PDE can be approximated by the nonlinear output function $f(t, \mathbf{x}, \mathbf{p}; \theta)$ of this NN, where $f$ maps the spatiotemporal coordinates $t$ and $\mathbf{x}$ to the scalar output field of the PDE, given a value for the secondary parameters $\mathbf{p}$. Note that the input of the NN consists of the coordinates $t$ and $\mathbf{x}$ as well as the parameters $\mathbf{p}$, and its dimension is $1 + d + \dim(\mathbf{p})$. See [32] for a detailed introduction to NNs. In the following, we drop the arguments of $f$ whenever they are not explicitly needed.
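To make this setup concrete, the following minimal sketch shows one way such a network could be defined in TensorFlow (the module the VarNet library is built on), with the spatiotemporal coordinates and the secondary parameters concatenated into a single input vector. The layer widths, activation, and helper names are illustrative assumptions, not the library's actual architecture.

import tensorflow as tf

def build_mlp(dim_x=2, dim_p=1, widths=(20, 20)):
    # Input is the concatenation [t, x1..xd, p1..pm]; output is the scalar PDE field.
    inp = tf.keras.Input(shape=(1 + dim_x + dim_p,))
    h = inp
    for w in widths:
        h = tf.keras.layers.Dense(w, activation="sigmoid")(h)
    out = tf.keras.layers.Dense(1, activation=None)(h)   # scalar concentration estimate
    return tf.keras.Model(inp, out)

model = build_mlp()
# Evaluate at a batch of space-time points for a fixed parameter value.
t_x_p = tf.constant([[0.1, 0.5, 0.5, 1e-3]])             # shape (batch, 1 + d + dim(p))
c_hat = model(t_x_p)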

[Training Problem] Given the PDE input-data and the NN function $f$ with trainable parameters $\theta$, find the optimal value $\theta^*$ such that

$\theta^* = \operatorname{argmin}_{\theta} \int_{\mathcal{P}} \int_0^T \int_\Omega \left| f(t, \mathbf{x}, \mathbf{p}; \theta) - c^*(t, \mathbf{x}, \mathbf{p}) \right|^2 d\mathbf{x}\, dt\, d\mathbf{p}$,   (3)

where $c^*$ denotes the true solution of the PDE. As discussed in Section I-A, Problem II-B can be solved in different ways. The simplest approach is to use supervised learning to obtain an approximate solution of the PDE using labeled data collected from numerical methods and a loss function defined by objective (3). In this approach, the NN acts merely as a memory cell and an interpolator to store the solution more efficiently than storing the values at all grid points. Note that this approach does not alleviate the need for discretization-based numerical methods but instead relies on them. Alternative methods exist that train the NN in an unsupervised way. A typical approach is to consider a loss function defined by the PDE residual obtained when the approximate solution is substituted in (1). This approach has two problems. First, for the AD-PDE (1) it requires evaluation of second-order derivatives of $f$. Estimating a function from its higher-order derivatives is inefficient since differentiation only retains slope information and is agnostic to translation. Second, training with the differential form of the PDE amounts to learning a complicated field by only considering values at a limited set of points, i.e., a measure-zero set, ignoring correlations in space-time. In the following, we propose a different approach that addresses these issues by defining a loss function that relies on the variational form of the PDE.

III Neural Networks for the Solution of PDEs

III-A Variational Loss Function

The goal of Problem II-B is to learn the parameters $\theta$ of the NN so that for all values of the secondary parameters $\mathbf{p}$, the function $f(t, \mathbf{x}, \mathbf{p}; \theta)$ approximates the solution of the AD-PDE (1) as well as possible. To capture this, we need to define a loss function that reflects how well $f$ approximates the solution of (1). To do so, we rely on the variational form of the AD-PDE (1). Particularly, let $v(t, \mathbf{x})$ be an arbitrary compactly supported test function. Multiplying (1) by $v$ and integrating over the spatial and temporal coordinates we get

$\int_0^T \int_\Omega \left[ \dot{c} + \nabla \cdot (c\,\mathbf{u}) - \nabla \cdot (\kappa\,\nabla c) - s \right] v \; d\mathbf{x}\, dt = 0$.

Performing integration by parts for the temporal term we have

$\int_0^T \dot{c}\, v \, dt = - \int_0^T c\, \dot{v} \, dt$,

where the boundary term vanishes since $v$ is compactly supported over the temporal coordinate. Similarly,

$- \int_\Omega \nabla \cdot (\kappa\,\nabla c)\, v \; d\mathbf{x} = \int_\Omega \kappa\, \nabla c \cdot \nabla v \; d\mathbf{x}$,

where again the boundary terms vanish since $v$ is compactly supported. Finally, note that for velocities far below the speed of sound the incompressibility assumption holds and $\nabla \cdot \mathbf{u} = 0$, so that $\nabla \cdot (c\,\mathbf{u}) = \mathbf{u} \cdot \nabla c$; see [33]. Putting together all of these pieces, we get the variational form of the AD-PDE as

$\ell(c, v) := \int_0^T \int_\Omega \left[ - c\, \dot{v} + (\mathbf{u} \cdot \nabla c)\, v + \kappa\, \nabla c \cdot \nabla v - s\, v \right] d\mathbf{x}\, dt = 0$.   (4)

This variational form only requires first-order spatial derivatives of the field and an integration over a set of non-zero measure as opposed to a single point. The test function $v$ acts as a weight on the PDE residual, and the idea is that if (4) holds for a reasonable number of test functions with their compact supports located at different regions in space-time, the function has to satisfy the PDE. A very important feature of the test function is that it is compactly supported. This allows local treatment of the PDE, as opposed to considering the whole space-time at once, and is the basis of the FE method [25]. We discuss the explicit form of the test functions in Appendix A-A.

Given the variational form (4), we can now define the desired loss function. Consider a set of test functions $\{v_k\}_{k=1}^{N}$ sampling the space-time, a set of $N_0$ points corresponding to the IC, and sets of $N_i$ points for the enforcement of the BCs. Then, for a given set of PDE input-data, specified by $\mathbf{p}$, we define the loss function as

$J(\theta; \mathbf{p}) = \sum_{k=1}^{N} \left| \ell(f, v_k) \right|^2 + \frac{w_1}{N_0} \sum_{j=1}^{N_0} \left| f(0, \mathbf{x}_j) - c_0(\mathbf{x}_j) \right|^2 + \frac{w_2}{N_b} \sum_{i} \sum_{j=1}^{N_i} \left| f(t_{ij}, \mathbf{x}_{ij}) - g_i(t_{ij}, \mathbf{x}_{ij}) \right|^2$,   (5)

where $\ell$ is defined by (4), the vector $w = (w_1, w_2)$ stores the penalty weights corresponding to each term, and $N_b = \sum_i N_i$ is the total number of training points for the BCs. Note that in the first term in (III-A), the integration is limited to the support of $v_k$, which is computationally very advantageous. In the next terms, normalizing by the number of training points makes the weights independent of the number of training points.

Next, consider a set of points $\{\mathbf{p}_l\}_{l=1}^{M}$ sampling the space of the secondary parameters. Then, integrating (III-A) over the secondary parameters and approximating the integral by these samples, we obtain the desired total loss function as

$J(\theta) = \sum_{l=1}^{M} J(\theta; \mathbf{p}_l)$.   (6)

This loss function is lower-bounded by zero. Since this bound is attainable for the exact solution of the AD-PDE (1), the value of the loss function is an indicator of how well the NN approximates the solution of the PDE. Training using the loss function (6) is an instance of unsupervised learning since the solution is not learned from labeled data. Instead, the training data here are unlabeled samples of space-time, and the physics captured by the AD-PDE (1) guides the learning of the parameters $\theta$. In that sense, our approach is a model-based method as opposed to a merely statistical method that automatically extracts features and cannot be easily interpreted.

[Numerical Computation] The derivatives that appear in the variational form (4) can be evaluated in a variety of ways. Here we use the automatic differentiation method available in TensorFlow. Furthermore, to evaluate the integral over the support of the test functions, we utilize the Gauss-Legendre quadrature rule as is common in the FE literature [25]. See Appendix A-B for more details on the computation of the loss function (6).
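As an illustration of these two ingredients, the sketch below evaluates the variational term for one test function using TensorFlow's automatic differentiation for the first-order derivatives of the NN output and a Gauss-Legendre rule for the integral. It assumes the reconstructed form of (4) above; the function name, argument layout, and precomputed test-function data are illustrative and are not part of the VarNet library's API.

import numpy as np
import tensorflow as tf

def variational_term(model, pts, v, dv_dt, dv_dx, u_vel, kappa, source, w, detJ):
    """Quadrature approximation of one summand ell(f, v_k) over supp(v_k).

    pts:   (n, 1+d) integration points [t, x1..xd]
    v, dv_dt: (n,) test-function values and temporal derivatives at pts
    dv_dx: (n, d) spatial derivatives of the test function at pts
    u_vel: (n, d) velocity; kappa, source, w: (n,) fields and quadrature weights
    """
    to_tf = lambda a: tf.convert_to_tensor(a, tf.float32)
    pts, v, dv_dt, dv_dx = map(to_tf, (pts, v, dv_dt, dv_dx))
    u_vel, kappa, source, w = map(to_tf, (u_vel, kappa, source, w))
    with tf.GradientTape() as tape:
        tape.watch(pts)
        c = model(pts)                      # NN approximation of the field
    grads = tape.gradient(c, pts)           # d c / d[t, x] via automatic differentiation
    dc_dx = grads[:, 1:]                    # first-order spatial derivatives only
    c = tf.squeeze(c, axis=-1)
    integrand = (-c * dv_dt
                 + tf.reduce_sum(u_vel * dc_dx, axis=-1) * v
                 + kappa * tf.reduce_sum(dc_dx * dv_dx, axis=-1)
                 - source * v)
    return detJ * tf.reduce_sum(w * integrand)

# 1D Gauss-Legendre points and weights on [-1, 1], tensorized over dimensions in practice:
xi, wq = np.polynomial.legendre.leggauss(2)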

III-B Optimal Training Points

As discussed in Section I-A, existing NN methods to solve PDEs use arbitrary training points. Instead, here we optimally select the training points over the space-time to expedite the training process and increase the accuracy of the resulting solutions. This is very similar to the idea of local adaptive mesh refinement in the FE literature [24]. Note that often the PDE solution varies more severely in certain regions of the space-time, and adding more training points in those regions ensures that the NN adequately captures these variations. In order to prioritize more informative training points, we employ the residual field of the PDE. For a given set of input data specified by $\mathbf{p}$, plugging the NN function $f$ into the PDE (1), we obtain the PDE residual field as

$r(t, \mathbf{x}) = \dot{f} + \nabla \cdot (f\,\mathbf{u}) - \nabla \cdot (\kappa\,\nabla f) - s$.   (7)

Given this metric, we sample the space-time using the rejection sampling algorithm [34] to generate training points that are proportionally more concentrated in regions with higher residual values. To this end, let $\bar{r}(t, \mathbf{x}) \propto |r(t, \mathbf{x})|$ denote the normalized metric defining a distribution over the space-time. Furthermore, let $q(t, \mathbf{x})$ denote the uniform distribution over the space-time from which we draw the candidate training points, where $q$ is a constant. Also, let $M$ be another constant such that $\bar{r} \le M q$ over the space-time. Then, for a given candidate point drawn from $q$, the rejection sampling algorithm consists of drawing a random variable $z$ uniformly from $[0, 1]$ and accepting the candidate point if $z \le \bar{r} / (M q)$. Define $r_{\max}$ as an upper bound on $|r|$ over the space-time. Then, the acceptance condition simplifies to

$z \le \dfrac{|r(t, \mathbf{x})|}{r_{\max}}$.   (8)

Note that according to (8), candidate points that are closer to regions with higher PDE residual values have higher chances of being selected. We approximately determine $r_{\max}$ by sampling the PDE residual over a uniform grid.
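A minimal sketch of this residual-guided sampling follows, assuming the simplified acceptance test above and a user-supplied routine that evaluates the residual magnitude at candidate points; both the routine name and the `r_max` estimate are illustrative assumptions.

import numpy as np

def sample_training_points(pde_residual, bounds, n_new, r_max, rng=None):
    """Rejection sampling of space-time training points proportional to |residual|.

    pde_residual: callable mapping (m, dim) candidate points to |r| values
    bounds:       (dim, 2) array of [low, high] per space-time coordinate
    r_max:        estimate of the maximum |residual|, e.g., from a uniform grid
    """
    rng = rng or np.random.default_rng()
    accepted = []
    while len(accepted) < n_new:
        # Draw a batch of candidates uniformly over the space-time box.
        cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(4 * n_new, bounds.shape[0]))
        z = rng.uniform(size=cand.shape[0])
        keep = z <= np.abs(pde_residual(cand)) / r_max   # acceptance test, cf. (8)
        accepted.extend(cand[keep])
    return np.asarray(accepted[:n_new])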

[Boundary-Initial Conditions] A similar procedure is utilized for the optimal selection of training points for the BICs. Particularly, we replace metric (7) with the corresponding least-squares error values between the prescribed IC and BCs and the NN output, as given by the second and third terms in the loss function (III-A).

[Secondary Parameters] We take into account the effect of the secondary parameters in sampling from the space-time by integrating the residual field over them. Note also that a similar approach can also be used to optimally select the samples of the secondary parameters.

III-C VarNet Deep Learning Library

The proposed VarNet algorithm combines the loss function (6) with the optimal selection method for the training points to approximate the solution of AD-PDEs.

Require: Space-time domain and PDE input data;
Require: Widths of the MLP-NN layers;
Require: Numbers of training points;
Require: Number of training epochs and penalty weights in (III-A);
1:  Generate uniform training points over the space-time, its boundaries, and the parameter space;
2:  for each training epoch do
3:     for each sample of the secondary parameters do
4:        Update trainable parameters via an optimizer;
5:     end for
6:     if loss value has converged then
7:        Add optimal training points using (8);
8:     end if
9:  end for
Algorithm 1 VarNet Algorithm

The algorithm begins by requiring the properties of the spatial domain and the time horizon as well as the input data to the AD-PDE, i.e., the velocity and corresponding diffusivity fields, the source field, and the BICs, as functions of the spatial and temporal coordinates and possibly the secondary parameters. These inputs define an AD-PDE instance that is implemented in the VarNet library using the ADPDE(domain, diff, vel, source, tInterval, IC, BCs, MORvar) class, where depending on the dimension of the spatial domain, domain is defined by an instance of the Domain1D or PolygonDomain2D class, and MORvar is an instance of the MOR class for parametric problems. Next, the VarNet Algorithm 1 requires the widths of the layers of the multi-layer perceptron (MLP) NN, used to capture the solution of the AD-PDE, as well as the numbers of training points. This information defines the training problem as an instance of the VarNet(layerWidth, discNum, bDiscNum, tDiscNum) class. Finally, the number of training epochs as well as the penalty weights for the terms appearing in the loss function (III-A) must be given.

Given the number of training points, in line 1, the algorithm generates a uniform grid over the space-time, its boundaries, and possibly the space of secondary parameters. The training process begins in line 2, where in each epoch the algorithm iterates through all training data. The loop over the samples of the secondary parameters is performed in line 3. Training on a subset of data, corresponding to one value of the secondary parameters at a time, amounts to the idea of batch optimization and computationally facilitates the inclusion of an arbitrarily large number of samples of the secondary parameters by breaking down the training points into smaller batches [32]. In line 4, the algorithm performs the optimization iteration for the current epoch, updating the parameters $\theta$ of the NN for all training samples of the space-time and a given set of PDE input-data. A variety of optimization algorithms, included in TensorFlow, can be used for this purpose; see [32] for details. In line 6, the algorithm checks the convergence of the training process, and in line 7 it adds new optimal training points over the space-time according to the procedure outlined in Section III-B. The number of these new training points is given as a fraction of the original number of training points. The training process is performed in the member function VarNet.train(epochNum, weight, frac, batchNum).
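The snippet below sketches how these pieces might fit together for a simple 1D problem. The module paths, constructor defaults, the way the ADPDE instance is attached to the VarNet problem, and all numeric values are assumptions for illustration only and should be checked against the code documentation [26].

# Hypothetical usage sketch of the interfaces quoted above (argument values illustrative).
from VarNet import Domain1D, ADPDE, VarNet   # module path assumed

domain = Domain1D()                          # 1D spatial domain (assumed defaults)
pde = ADPDE(domain,
            diff=1.0e-3,                     # diffusivity
            vel=1.0,                         # velocity
            source=0.0,                      # source field
            tInterval=[0.0, 2.0],            # time horizon
            IC=0.0,                          # initial condition
            BCs=[0.0, 0.0])                  # Dirichlet boundary values

problem = VarNet(pde,                        # wiring of the PDE instance is assumed
                 layerWidth=[20],            # single hidden layer MLP
                 discNum=20,                 # interior spatial training points
                 bDiscNum=None,              # boundary training points
                 tDiscNum=100)               # temporal training points

problem.train(epochNum=5000, weight=[1.0, 1.0], frac=0.5, batchNum=1)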

The VarNet library [26] implements Algorithm 1 and contains additional functionalities including data parallelism and extensive tools for report generation during training and post-processing of the trained NNs. In Appendix A we present more details on the implementation of Algorithm 1. For more details on the VarNet library [26] and its functionalities refer to the code documentation.

[Convergence] By the universal approximation theorem, for a large enough number of trainable parameters $n$, the NN can approximate any smooth function [35]. Given that the AD-PDE (1) has a unique smooth solution for an appropriate set of input data, the NN should converge to this solution as $n \to \infty$; see [16] for details.

[Parallelization] Referring to Algorithm 1, parallelization of the training process is trivial. We can choose the number of samples in accordance with the available computational resources and decompose and assign the summations to different processing units.

IV Numerical Experiments

In this section we present numerical experiments to demonstrate the performance of the VarNet Algorithm 1. Particularly, we study four different types of problems with an increasing order of difficulty: (i) 1D and 2D time-dependent AD problems with analytical solutions that are used as benchmarks in the FE literature. For highly advective cases with large Peclet numbers (the Peclet number is a measure of the relative dominance of advection over diffusion and is defined as Pe = $U L / \bar{\kappa}$, where $L$ is a characteristic length of $\Omega$, $U$ is a characteristic velocity, and $\bar{\kappa}$ is the average diffusivity of the medium), a boundary layer is formed whose prediction becomes very challenging for discretization-based numerical methods. The objective here is to demonstrate the capability of our algorithm to solve problems that require very fine grids when solved by these methods. These two problems have constant input-data, which often is the only case that is studied in the relevant literature. (ii) A 2D-AD problem with an analytical solution and non-constant input data. This is a fabricated problem where we find the source field that generates a desired concentration field given a set of input data; this is a standard method used to check the convergence of FE methods. (iii) A 2D-AD problem with compactly supported sources. In this case, we cannot obtain an analytical solution for the AD-PDE and only an approximation from numerical methods, e.g., the FE method, is available. The objective here is to examine the effect of localized source terms. (iv) A 2D-AD problem whose input data correspond to realistic mass transport phenomena in turbulent flows. In fluid dynamics applications the input data are often obtained for turbulent conditions and are non-constant; an analytical solution is not an option then. This is the most general case that one encounters in using the AD-PDE.

We solve all cases using the VarNet library [26] that was developed as part of this paper. This library is written in Python using the TensorFlow module; see Appendix A for details about its implementation and the code documentation [26] for more details on its capabilities. We use the ADAM stochastic optimization algorithm to train the NNs [36]. The VarNet Algorithm 1 is run on a single NVIDIA GeForce RTX 2080 Ti processor for each problem instance. In each case, we use a multi-layer perceptron (MLP) to capture the solution of the AD-PDE and, unless otherwise specified, we use a sigmoid activation function for all neurons. For a given uniform grid over the space-time, we compute the approximation error as

$\text{err} = \dfrac{\lVert \mathbf{c} - \mathbf{c}^* \rVert}{\lVert \mathbf{c}^* \rVert}$,   (9)

where the vectors $\mathbf{c}$ and $\mathbf{c}^*$ stack the outputs of the NN and the exact solution evaluated at the grid points, respectively. We use an analytical solution or the FE approximate solution to compute the vector $\mathbf{c}^*$. (When the reported error values are biased against the NN solution due to, e.g., error in the analytical solution, we add an asterisk to the reported value.) To generate the required FE models, we use an in-house code based on the DiffPack library [37]. We set the fraction of optimal training points in the VarNet Algorithm 1 to a fixed value and use the short-hand notation '1Dt' to denote a one-dimensional time-dependent AD-PDE, and so on for the other cases. When reporting the number of training points, we may only give the numbers of interior training points; the numbers of training points for the BICs are then deduced from them. For instance, for a 2Dt problem, the first entry refers to the discretization of the temporal coordinate and the last two entries refer to the discretization of the spatial coordinates; the numbers of IC and BC training points then follow from these values and the lengths of the boundary segments.
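For reference, the error metric (9) as reconstructed above amounts to a relative l2 mismatch over the evaluation grid; a small helper might look as follows (names are illustrative).

import numpy as np

def relative_error(c_nn, c_ref):
    """Relative l2 error between the NN prediction and the reference solution, cf. (9)."""
    c_nn, c_ref = np.ravel(c_nn), np.ravel(c_ref)
    return np.linalg.norm(c_nn - c_ref) / np.linalg.norm(c_ref)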

IV-A 1Dt AD-PDE with Analytical Solution

In this section we study the performance of the VarNet Algorithm 1 for the benchmark problem presented in [38]. This problem is defined for as

(10)

Similar to [38], we fix the velocity and study the performance of our approach for two diffusivity values . For the latter value, the analytical solution becomes numerically unstable.

Figure 1 shows the solution provided by the VarNet Algorithm 1, overlaid on the analytical solution. The final error (9) is err .

Fig. 1: Snapshots (a)-(c) of the NN solution to the 1Dt AD problem (IV-A) for the larger diffusivity value.

We use a single-layer MLP with sigmoid activation, together with a uniform grid of training points and fixed penalty weights. Note that the number of temporal training points should be sufficiently large to ensure that the dimensionless Courant number, defined as

$C = \dfrac{u\,\Delta t}{\Delta x}$,   (11)

is upper-bounded, which guarantees the stability of the solution across time [39]. An animation of the approximate solution for this case is compared to the analytical solution in [40]. Note that the solution is continuous both in space and time and does not require interpolation to generate this video.
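As a quick sanity check of this heuristic, one can compute the Courant number for a given uniform training grid using the reconstructed definition (11); the domain length, time horizon, and grid sizes below are illustrative, not the values used in the experiments.

T, L, u = 2.0, 2.2, 1.0           # time horizon, domain length, velocity (illustrative)
t_num, x_num = 100, 20            # temporal / spatial training-point counts (illustrative)
dt, dx = T / t_num, L / x_num
C = u * dt / dx                   # Courant number, cf. (11)
print(f"Courant number C = {C:.2f}")   # should stay bounded, e.g., around 1 or below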

As mentioned earlier, one of the important contributions of this paper is the novel loss function that is based on the variational form of the AD-PDE as opposed to its differential form. To demonstrate the effectiveness of our loss function, we also solve this case using the PDE residual (7) as the loss; see also the discussion following Problem II-B. Using the same architecture and settings with a number of training points equivalent to the number of integration points used above, the final error is err . After a set of simulations, the best result has an error of err . An animation of the approximate solution for this case is compared to the analytical solution in [41]. Observe that there is considerable error in capturing the solution and its BICs when the PDE residual is used, even though the number of training points is considerably larger.

Next, we consider the smaller diffusivity value, which corresponds to an order of magnitude higher Peclet number and is much more challenging to solve. Figure 2 shows the snapshots of the solution from an MLP with two hidden layers. We train the NN using a finer set of training points and appropriately chosen weights.

Fig. 2: Snapshots of the NN solution to the 1Dt AD problem (IV-A) for the smaller diffusivity, overlaid on the exact solution at select points shown by the dots.

As mentioned earlier, for this diffusivity value the analytical solution is numerically unstable and we only compare to a set of point values reported in [38]. The final error with respect to these point values is err* . Note that this error value is conservative since it only considers the boundary layer, which is the most challenging part to approximate, as opposed to the whole space-time. Discretization-based methods have difficulty capturing the boundary layer and require an extremely fine grid and consequently a large model [38], whereas our approach captures the solution with a small number of trainable parameters. An animation of the approximate solution for this case is given in [42].

Figure 3 shows the evolution of the individual terms and total value of the loss function (III-A), the evolution of the PDE residual (7), and the corresponding final residual snapshots for the case of .

(a) components and total value of loss function (III-A)
(b) PDE residual (7) convergence
(c) final residual snapshots
Fig. 3: Loss function (III-A), PDE residual (7) history, and final residual values for the 1Dt AD problem (IV-A) with diffusivity .

Referring to Figure 3(a), note that the penalty weights on the BICs in (III-A) must be selected as high as possible without making the variational term insignificant. Using small weights on the BICs will result in inaccurate solutions since the AD-PDE does not have a unique solution without properly enforced BICs. For a properly selected set of weights, the values of the BIC terms in the loss function (III-A) drop considerably fast to small values while the value of the variational term does not. This amounts to an increase in the PDE residual (7) at the early training epochs, as can be seen from Figure 3(b). Once the BICs are approximately enforced, the contribution of the BIC terms in the loss function (III-A) decreases, the variational term starts to drop, and the output of the NN converges to the unique solution of the 1Dt AD-PDE (IV-A); this amounts to a drop in the PDE residual at the later epochs; see Figure 3(b). The sharp increase partway through training corresponds to the addition of the optimal training points. Furthermore, from Figure 3(c) it can be observed that the value of the residual is considerably small across the space-time except near the boundary layer.

Table I shows the performance of the VarNet Algorithm 1, in solving the AD problem (IV-A) with diffusivity , for different activation functions and network capacities where the number of training points and the weights are tuned for the best performance in each case.

No. activation optimal err*
1 tanh no 0.83
2 sigmoid no 0.49
3 sigmoid no 0.44
4 sigmoid yes 0.42
5 sigmoid scaled 0.39
6 sigmoid no 0.08
7 sigmoid yes 0.09
8 sigmoid no 0.09
9 sigmoid no 0.08
TABLE I: Error values (9) for the 1Dt AD problem (IV-A) with diffusivity , calculated for the set of point values reported in [38].

Specifically, we examine three MLP structures with different numbers of neurons in the layers. As mentioned before, the numbers of BIC training points can be deduced from the interior grid and are not reported. Comparing the first two cases, observe that the tanh activation is incapable of capturing the solution; this behavior is observed on other occasions as well and thus, in the rest of the simulations we only use the sigmoid activation. The next three cases study the effect of using optimal training points, as proposed in Section III-B, against a uniform selection of the training points. More specifically, case 3 reports the error values for a number of uniform training points that is equivalent to the addition of optimal points in cases 4 and 5. In case 5, the support of the test functions used at the optimal training points is scaled down by a factor of 10 in each dimension, decreasing the error further. This is motivated by the argument in Section III-B that the optimal training points should be placed in regions of high variability; decreasing the support allows the NN to better capture the curvature at the boundary layer. The white dots in Figure 4 show the optimally selected training points for this case overlaid on the PDE residual field (7).

Fig. 4: Training points for case 5 in Table I corresponding to the 1Dt AD problem (IV-A) with overlaid on the PDE residual field (7). The green dots show the uniform training points across the space-time whereas the white dots show the optimally selected points according to the discussion of Section III-B.

Note that, as expected, most of the optimal training points are placed close to where the boundary layer is formed. This local refinement decreases the final error without having to uniformly increase the number of training points across the space-time. Further increasing the number of trainable parameters and the number of training points dramatically decreases the error, as can be seen in case 6. Note however that increasing the network capacity without increasing the number of training points destabilizes the training process. Also, the increase in the number of training points across the temporal and spatial coordinates needs to be done proportionally to ensure that the Courant number (11) stays upper-bounded. Comparing case 7 to case 6, observe that once the number of training points is increased enough so that the whole space-time is adequately sampled, the addition of optimal training points is not beneficial. Finally, considering cases 8 and 9, observe that a further increase of the network capacity does not seem to decrease the error. Referring to Figure 2, note that the error values reported in Table I effectively capture the error in the boundary layer. To capture this layer even better, the number of spatial samples and thus, to maintain the Courant number, the number of temporal samples need to be considerably increased.

Finally, we study the performance of the VarNet Algorithm 1 for MOR. Specifically, we train NNs that parametrically solve the AD-PDE (IV-A) as a function of the diffusivity over a range of more than one order of magnitude. We use geometrically spaced diffusivity samples for training. Table II shows the final errors for diffusivity values other than the ones used for training, and for different numbers of training points.

No. err
1 0.33*
2 0.40*
3 0.14
4 0.09*
5 0.15*
6 0.00
TABLE II: Error values (9) for the parametric solution of the 1Dt AD problem (IV-A) as a function of diffusivity .

For the two smaller values of diffusivity, the analytical solution is unstable and only the point values reported in [38] are compared, which results in biased (larger) error values for these cases. Note that the parametric solutions are as good as the ones reported for the individual diffusivity values above, meaning that MOR has no major effect on the solution predicted by the NNs. This is in contrast to traditional MOR methods, which often introduce a large error in the estimation. Note also that the VarNet Algorithm 1 is able to capture the solution of the AD-PDE (IV-A) with a modest number of trainable parameters. The continuous, parametric representation of the solution provides valuable tools for the analysis of dynamical systems. For instance, we can study the sensitivity of the solution of the AD problem (IV-A) to a change of the diffusivity of the medium. Figure 5 shows the solution snapshots at a fixed time as a function of the diffusivity.

Fig. 5: The parametric solution of the 1Dt AD problem (IV-A) as a function of diffusivity at .

Note that as the diffusivity decreases, the peak value is better preserved and carried downstream with less diffusion, and as a result the boundary layer becomes sharper.

IV-B 2Dt AD-PDE with Analytical Solution

In this section we consider a 2D time-dependent AD problem with an analytical solution presented in [43]; this problem is used as a benchmark to evaluate the FE methods proposed in [44]. Specifically, a constant, one-directional flow is considered and we adjust the constant diffusivity to study the effect of the Peclet number on the solution. We consider a convex 2D domain where we assign a constant, non-zero Dirichlet condition to boundary 2 on the left side of the domain; see Figure 6.

Fig. 6: 2D convex domain for 2Dt AD problem. The numbers in parentheses denote the index of the boundary segments. The constant concentration on the boundary segment (2) is transported downstream by advection and diffusion.

In other words, the concentration on the boundary is transported downstream via advection and in the transverse direction via diffusion, and at any given time a step-like concentration front exists that travels downstream with the flow speed. At the initial time there is no concentration in the domain, and we assume the boundaries are far enough that they stay concentration-free for the whole time, i.e., in (2) we set the IC and the remaining BCs to zero. Noting that the downstream extent of the domain is immaterial as long as it is set large enough, we fix it to a sufficiently large value.

In Table III we report the error values compared to the analytical solution provided in [43] for different diffusivity values where we set weights .

No. optimal err
1 no 0.24
2 no 0.20
3 yes 0.16
4 scaled 0.18
5 no 0.15
6 no 0.04
7 no 0.07*
TABLE III: Error values (9) for the 2Dt AD problem for different constant diffusivity values.

Comparing the first six cases, observe that generally the simultaneous addition of more training points and increase of the capacity of the NN, i.e., the number of trainable parameters, leads to a decrease in the error value. Comparing cases 2, 3, and 4, it can be seen that using optimal training points results in a smaller error than a uniform grid with the same number of training points. Note that since in this problem, unlike the 1Dt AD problem (IV-A), the high variations are not concentrated at the boundary, scaling the support of the optimal test functions in case 4 seems to have an adverse effect on the final error. Case 7 shows the performance of the proposed method when the Peclet number is increased by three orders of magnitude. Note that this increase has no effect on the performance of the VarNet Algorithm 1; this is in contrast to FE methods, which are known to become unstable for higher Peclet numbers [45]. Figure 7 shows the snapshots of the concentration field corresponding to case 7. Note the traveling step-like concentration front.

(a)
(b)
(c)
Fig. 7: Snapshots of the NN solution to the 2Dt AD problem for diffusivity .

An animation of the solution field can be found in [46]. Figure 8 shows the exact and approximate concentration values along the longitudinal profile; see also Figure 6.

Fig. 8: Exact and approximate concentration corresponding to the 2Dt AD-PDE along the longitudinal profile for .

Note that the approximate concentration accurately matches the analytical solution of [43]. In fact, the analytical solution itself is inaccurate close to the boundary, meaning that the error value err* is biased in case 7 of Table III.

IV-C 2D AD-PDE with Analytical Solution and Field Input-Data

The problems that we have studied so far have constant input data, i.e., the diffusivity and velocity fields as well as the BICs are constant and the source function is set to zero. Often, this is the only case studied in the relevant literature. When the PDE input-data are not constant, it is often impossible to obtain analytical solutions. In this section we construct a problem with non-constant input data and an analytical solution: we start with prescribed concentration, diffusivity, and velocity fields, and use the AD-PDE (1) to derive a corresponding source field that will create the desired steady-state concentration field. Specifically, we consider a domain with the following concentration field

(12)

where an integer parameter adjusts the number of zero-crossings of the trigonometric function in the field; the larger this integer, the more complicated the resulting field. Plugging this field into the AD-PDE (1), we obtain the corresponding source field that will generate the concentration field (12) for a given set of PDE input-data. Note that this concentration field attains a value of zero on the boundaries, so we use zero-valued Dirichlet BCs in (2b). Since the problem is in steady state, the IC (2a) is irrelevant here.

In Table IV, we report the performance of the VarNet Algorithm 1 in solving this problem instance where we set the weights for the reported results.

No. err
1 0.03
2 0.04
3 0.03
4 0.00
5 field (13b) 0.04
6 field (13a) field (13b) 0.03
TABLE IV: Error values (9) for the 2D steady-state AD problem with analytical solution (12).

Note that since the AD-PDEs studied here and in the following sections are time-independent, the weight on the IC term in (III-A) is trivially set to zero and is not reported. In the first four cases in Table IV, we use constant diffusivity and velocity fields, as reported in columns four and five. A single-layer NN with 20 neurons can easily capture the concentration field (12) in these cases regardless of the Peclet number. Comparing the first two cases, observe that increasing the number of training points does not improve the error; the reason could be that the current network capacity is insufficient to better capture the concentration field (12). In cases 5 and 6, we use a larger value of the integer parameter, which results in the concentration field depicted in Figure 9(a). More specifically, in case 5 we use a non-constant velocity field and in case 6 we also use a non-constant diffusivity field. These two fields are defined as

(13a)
(13b)

where the maximum diffusivity value in the domain is set to a fixed constant. These two fields are depicted in Figures 9(b) and 9(c), respectively. Figure 9(d) shows the resulting source field corresponding to case 6.

(a) exact concentration field (12)
(b) diffusivity field (13a)
(c) velocity magnitude field (13b)
(d) source field
Fig. 9: PDE input-data for the 2D AD problem with analytical fields. Specifically, these fields correspond to case 6 in Table IV.

The VarNet Algorithm 1 successfully solves these two cases using only a single layer with trainable parameters.

IV-D 2D AD-PDE with Compactly Supported Source

The source field used in the previous section is not compactly supported; see Figure 9(d). Often in practice, the source term in the AD-PDE (1) is compactly supported and occupies a very small part of the domain. The existence of such source terms translates to a more localized and nonlinear concentration field that is more challenging to capture. To investigate this point further, in this section we solve the AD-PDE (1) over a convex domain with constant input data and a set of Dirichlet BCs for two localized sources. Specifically, we first consider a Gaussian source given by

$s(\mathbf{x}) = s_0 \exp\!\left( - \lVert \mathbf{x} - \mathbf{x}_0 \rVert^2 / (2\sigma^2) \right)$,

where $s_0$ is the source intensity, $\mathbf{x}_0$ is the source center, and $\sigma$ is its characteristic length. This source technically is not compactly supported, but for practical purposes it decays to almost zero a few characteristic lengths away from the center $\mathbf{x}_0$. Because of its smoothness, the corresponding concentration field is easier to solve for. We also consider a compactly supported tower source function given by

$s(\mathbf{x}) = s_0 \, \mathbb{1}_{\omega}(\mathbf{x})$,   (14)

where $\omega$ delineates the support and is a square region for the following results. For consistency, we define a characteristic length for this source as the side-length of its support. We adjust the source intensity for both sources to ensure that the resulting maximum concentration levels are close to unity. Note that for these arbitrary sources, we cannot solve the AD-PDE (1) analytically. Instead, we use the FE solution as the ground-truth to calculate the error values according to (9).
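For illustration, the two localized sources could be implemented as follows; the exact profiles, normalization, and parameter names (intensity s0, center x0, characteristic length l) are assumptions consistent with the description above rather than the paper's exact expressions.

import numpy as np

def gaussian_source(x, s0=1.0, x0=(0.5, 0.5), l=0.1):
    """Smooth Gaussian source; decays to ~0 a few characteristic lengths from x0."""
    r2 = np.sum((np.asarray(x) - np.asarray(x0)) ** 2, axis=-1)
    return s0 * np.exp(-r2 / (2.0 * l ** 2))

def tower_source(x, s0=1.0, x0=(0.5, 0.5), l=0.1):
    """Compactly supported tower source on a square of side l centered at x0, cf. (14)."""
    inside = np.all(np.abs(np.asarray(x) - np.asarray(x0)) <= l / 2.0, axis=-1)
    return s0 * inside.astype(float)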

In Table V, we report the performance of the VarNet Algorithm 1 in solving the AD-PDE (1) for these localized sources.

No. source err
1 Gaussian 0.00
2 Gaussian 0.02
3 Gaussian 0.02
4 tower 0.17
TABLE V: Error values (9) for the 2D AD problem with compactly supported source fields.

Comparing the first three cases, observe that for smooth Gaussian sources, our proposed algorithm is capable of obtaining solutions identical to the FE method with a single-layer MLP with 20 neurons; the weights are set accordingly for this case. Nevertheless, the problem is more challenging when the compactly supported tower source is used. Using a larger network with three layers, more training points, and appropriately chosen weights, the algorithm solves the problem with the error reported in case 4. Figure 10 shows the NN and FE approximations of the concentration field corresponding to case 4.

(a) NN solution
(b) FE solution
Fig. 10: NN and FE approximations of the concentration field for compactly-supported tower source corresponding to case 4 in Table V.

Observe that the general shape of the concentration field is recovered almost exactly but the peak value is not. Using larger network capacities and numbers of training points would obviously decrease the error.

IV-E 2D AD-PDE with Turbulent Input Data

So far in the simulations, we have focused on simple flow fields that do not often occur in realistic problems. For instance, when the AD-PDE (1) is used to model mass transport in 'air', even at very small velocities the resulting flow field is turbulent [22], causing an excessive amount of mixing due to instantaneous changes in the velocity vector. This is often captured with an equivalent turbulent diffusivity that is added to the laminar diffusivity of the medium; see [2] for more details. In the following, we consider velocity and diffusivity fields that are obtained using computational fluid dynamics (CFD) methods via ANSYS-Fluent. Specifically, consider the problem that we studied in Section IV-B with the domain depicted in Figure 6, but in steady state. We assume that air flows into this domain through boundary 2 with a velocity of 1 m/s and leaves the domain through the opposite side. The corresponding velocity magnitude and diffusivity fields are given in Figure 11.

(a) velocity magnitude field (m/s)
(b) diffusivity field (m²/s)
Fig. 11: PDE input-data fields corresponding to the turbulent flow in Section IV-E obtained using CFD techniques and ANSYS-Fluent.

Similar to Section IV-B, we assume a constant Dirichlet BC on boundary 2 (the concentration unit is arbitrary). The rest of the BCs are set to zero.

First, we study the case of a constant diffusivity field combined with the non-constant velocity field of Figure 11(a). For this case, we use an MLP with three hidden layers, with the training points and penalty weights chosen accordingly. The approximation error for this case is err . Figure 12 shows the approximate solutions obtained from the VarNet Algorithm 1 and the FE method, respectively.

(a) NN solution
(b) FE solution
Fig. 12: NN and FE solutions for the AD problem of Section IV-E with a constant diffusivity field and the turbulent velocity field depicted in Figure 11(a).

Figure 13 shows the loss value as a function of training epochs.

Fig. 13: Loss function (III-A) as a function of training epochs for the AD problem of Section IV-E with constant diffusivity and turbulent velocity fields.

Note that for a large number of epochs the iterations get trapped in a local minimum after an initial sharp decrease, and then start to drop again. In fact, when the maximum number of epochs is reached, the solution is still improving. As more realistic, non-constant fields are used, the training process becomes more challenging since the loss function is more nonlinear. In such cases, a larger number of epochs might be necessary to ensure convergence to an acceptable local minimum. Regular shuffling of the training data can also help with escaping such local minima.

Next, we consider the non-constant diffusivity field given in Figure 11(b). This results in an even more nonlinear loss function. To solve the AD-PDE (1) in this case, we use an MLP with three hidden layers, an appropriate number of training points, and suitable penalty weights. The resulting approximation error for this case is err . Note that, similar to Figure 10 in Section IV-D, the shape of the concentration field is reconstructed here but the peak value is inaccurate. Throughout the simulations, we have used very small MLP networks with small capacities. Using larger networks along with larger numbers of training points and training for longer periods will further improve the solutions. This is particularly true for this case since the loss curves clearly show that the NN is still improving when the maximum number of epochs is reached.

V Conclusion

In this paper we proposed a new model-based unsupervised learning method, called VarNet, for the solution of partial differential equations (PDEs) using deep neural networks (NNs). Particularly, we proposed a novel loss function that relies on the variational (integral) form of PDEs as opposed to their differential form, which is commonly used in the literature. Our loss function is discretization-free, highly parallelizable, and more effective in capturing the solution of PDEs since it employs lower-order derivatives and trains over regions of space-time with non-zero measure. Given this loss function, we also proposed an approach to optimally select the space-time samples used to train the NN, based on the feedback provided from the PDE residual. The models obtained using our algorithm are smooth and do not require interpolation. They are also easily differentiable and can directly be used for control and optimization of PDEs. Finally, the VarNet algorithm can straightforwardly incorporate parametric PDE models, making it a natural tool for model order reduction of PDEs. We demonstrated the performance of our method through extensive numerical experiments for the advection-diffusion PDE as an important case-study.

Appendix A Variational Loss Function

A-A Test Functions

As discussed in Section III-A, the only requirement for the test function is that it must be compactly supported. Most of the FE shape functions satisfy this requirement and can be used with our proposed algorithm; see [25]. Often in FE schemes, standard elements are defined in the so-called mathematical domain and then mapped to the actual elements in the physical domain under an isotropy assumption. In this paper we use a simple 1D standard element with linear shape functions as the building block to construct the desired test functions. Particularly, let $\xi \in [-1, 1]$ denote a generalized coordinate in the mathematical domain, which can be mapped to a spatial or temporal coordinate in the physical domain later on, and define the linear basis functions given by

$\varphi_1(\xi) = \dfrac{1 - \xi}{2}, \qquad \varphi_2(\xi) = \dfrac{1 + \xi}{2}$.   (15)

Figure 14 shows the variation of these two shape functions along the length of the 1D element.

Fig. 14: Linear shape functions for the 1D element.

Given this 1D element, corresponding standard, higher-dimensional elements can easily be constructed. Let $\bar{d}$ denote the desired dimension of these elements, where $\bar{d} = d + 1$ for time-dependent problems and $\bar{d} = d$ otherwise. Furthermore, let $\xi_j$ denote the coordinate along the $j$-th dimension, let $\boldsymbol{\xi} = (\xi_1, \dots, \xi_{\bar{d}})$ denote the corresponding vector of coordinates, and let the multi-index $\mathbf{k} = (k_1, \dots, k_{\bar{d}})$ with $k_j \in \{1, 2\}$ specify one of the $2^{\bar{d}}$ corners of the higher-dimensional element. Given this multi-index, the shape function corresponding to the respective corner of the element is defined by the multiplication of the 1D shape functions (15) as

$\varphi_{\mathbf{k}}(\boldsymbol{\xi}) = \prod_{j=1}^{\bar{d}} \varphi_{k_j}(\xi_j)$.   (16)

For simplicity of presentation, let the augmented coordinate be defined as $\bar{\mathbf{x}} = (t, \mathbf{x})$ for time-dependent problems and $\bar{\mathbf{x}} = \mathbf{x}$ otherwise. Then, we denote the shape functions in the physical domain by $\psi_{\mathbf{k}}(\bar{\mathbf{x}})$. Assuming isotropic elements, we can write the coordinates in the physical domain in terms of the coordinates in the mathematical domain as

$\bar{x}_j = \sum_{\mathbf{k}} \varphi_{\mathbf{k}}(\boldsymbol{\xi}) \, \bar{x}_j^{\mathbf{k}}$,   (17)

where $\bar{x}_j^{\mathbf{k}}$ is the $j$-th coordinate of corner $\mathbf{k}$ of the element. In words, given the coordinates in the mathematical domain, the corresponding $j$-th coordinate in the physical domain is given as a convex combination of the $j$-th coordinates of the corners of the element, specified by the multi-index $\mathbf{k}$. This mapping from the mathematical to the physical domain implicitly defines the shape functions in the physical domain as $\psi_{\mathbf{k}}(\bar{\mathbf{x}}) = \varphi_{\mathbf{k}}(\boldsymbol{\xi}(\bar{\mathbf{x}}))$.

Without loss of generality, we use cubical elements so that this mapping amounts to simple scalings of the coordinates. Note that this simple choice of elements considerably facilitates the implementation of the VarNet Algorithm 1. Nevertheless, the type, shape, and size of the elements are arbitrary and can be as general as the ones used in different FE schemes [25].

Given the above shape functions, we are ready to define the test functions that we use in the loss function (III-A). Specifically, given a training point in the space-time, we consider a test function centered at that point and supported on the set of elements that share the point. Then, we define the desired test function as $v = \varphi_{\mathbf{k}}$ over each of these elements, where the multi-index $\mathbf{k}$ corresponds to the shared corner of that element. As an example, the test function for the 1D (physical) domain corresponding to Figure 14, centered at a training point shared by two elements, is depicted in Figure 15.

Fig. 15: A test function for the 1D physical domain. The support of the function is compact and contains two 1D elements that share the training point at which the test function is centered.

Over element 1, the test function coincides with the shape function associated with its right corner, whereas over element 2 it coincides with the shape function associated with its left corner. Note that the support of the test function is obtained by patching together elements 1 and 2 and is compact. During training, we arbitrarily select the locations of the test functions over the space-time while fixing the shape and size of the elements. As mentioned earlier, this assumption can be relaxed at the expense of more computational cost.
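The following sketch shows the tensor-product construction (15)-(16) on the reference element, assuming the [-1, 1] convention used in the reconstruction above; a test function is then assembled by evaluating, on each adjacent element, the shape function of the corner shared with the training point. All names are illustrative.

import numpy as np
from itertools import product

def shape_1d(xi, corner):
    """Linear 1D shape functions on [-1, 1]; corner is 1 or 2, cf. (15)."""
    return 0.5 * (1.0 - xi) if corner == 1 else 0.5 * (1.0 + xi)

def shape_nd(xi_vec, multi_index):
    """Tensor-product shape function for one corner of the cubical element, cf. (16)."""
    return float(np.prod([shape_1d(xi, k) for xi, k in zip(xi_vec, multi_index)]))

# Values of all 2^d corner shape functions at a point of a 2D (steady-state) element:
point = (0.2, -0.5)
values = {mi: shape_nd(point, mi) for mi in product((1, 2), repeat=2)}  # they sum to 1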

A-B Numerical Computation of Loss Function

In order to compute the loss function (III-A), we need to compute the variational form (4), which in turn requires the calculation of the derivatives of the test function $v$. To this end, we first derive the derivatives of the shape function for one of the elements within the support of $v$. Then, the desired derivatives of $v$ are obtained from the derivatives of these shape functions over each element. The decomposed form of the shape function (16) makes its differentiation straightforward. Specifically, using (15) for coordinate $\xi_j$ in the mathematical domain we have

$\dfrac{\partial \varphi_{k_j}}{\partial \xi_j} = \pm \dfrac{1}{2}$,   (18)

where the minus sign corresponds to $\varphi_1$. Let the matrix $\mathbf{D}$ collect these derivatives, where each column corresponds to one shape function, specified by the multi-index $\mathbf{k}$, and is equal to the gradient $\nabla_{\boldsymbol{\xi}} \varphi_{\mathbf{k}}$ whose entries are given by (18). Furthermore, define the Jacobian matrix $\mathbf{J}$ whose $j$-th column is the gradient of the coordinate $\bar{x}_j$ with respect to the mathematical coordinates. Let also the matrix $\mathbf{C}$ collect in its $j$-th column the $j$-th coordinates of the corners of the element, in the order specified by the multi-index. Then, we have

$\mathbf{J} = \mathbf{D}\, \mathbf{C}$.

This expression can be checked by direct differentiation of (17). For the cubical elements that we use in the physical domain, this Jacobian matrix is given explicitly by

$\mathbf{J} = \operatorname{diag}(a_1, \dots, a_{\bar{d}})$,   (19)

where $a_j$ denotes the scaling along coordinate $j$ from the mathematical to the physical domain. Finally, let the matrix $\mathbf{G}$ collect the desired derivatives $\nabla_{\bar{\mathbf{x}}} \psi_{\mathbf{k}}$, with respect to the physical coordinates $\bar{\mathbf{x}}$, in its columns. Then, we have

$\mathbf{G} = \mathbf{J}^{-1} \mathbf{D}$.

The FE class in the VarNet tool [26] computes the test functions, presented in the previous section, as well as their derivatives as discussed above.

The standard elements, defined in the mathematical domain, make the numerical computation of the variational form (4) and the loss function (III-A) very efficient. This is because Gauss-Legendre quadrature rules can be used for integration over these elements. Specifically, let $n_q$ denote the number of integration points for the 1D element introduced in Section A-A, let $\xi_i$ denote the values of the coordinate at these integration points for $i = 1, \dots, n_q$, and let $w_i$ denote the corresponding weights for a 1D $n_q$-point quadrature rule. For instance, for $n_q = 2$ we have $\xi_{1,2} = \mp 1/\sqrt{3}$ and $w_{1,2} = 1$. Let $g(\boldsymbol{\xi})$ denote the integrand in (4). Given the 1D integration point information, we can approximate the variational form of the PDE (4) over the support of a test function by a $\bar{d}$-dimensional quadrature rule as

$\ell \approx \sum_{e \in \mathcal{S}} |\mathbf{J}| \sum_{i_1=1}^{n_q} \cdots \sum_{i_{\bar{d}}=1}^{n_q} w_{i_1} \cdots w_{i_{\bar{d}}}\; g(\xi_{i_1}, \dots, \xi_{i_{\bar{d}}})$,

where $|\mathbf{J}|$ denotes the determinant of the Jacobian matrix (19), which is constant for the cubical elements used here, and $\mathcal{S}$ denotes the set of elements in the compact support of the test function. For small element sizes, the integrand can accurately be estimated by low-order polynomials, making the numerical integration very accurate. Finally, note that the integrand involves the PDE input data, which need to be evaluated at the integration points. Since these integration points are defined in the mathematical coordinates, we map them to the physical domain using (17).
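A compact sketch of this element-level quadrature follows, assuming cubical elements with per-axis scalings (so the Jacobian determinant is their product) and a generic integrand supplied by the caller; the function name and example sizes are illustrative.

import numpy as np
from itertools import product

def element_integral(integrand, scalings, n_gauss=2):
    """Tensorized Gauss-Legendre quadrature of `integrand` over one cubical element.

    integrand: callable on a point in the mathematical domain [-1, 1]^dim
    scalings:  per-axis scalings a_j from mathematical to physical coordinates
    """
    xi, w = np.polynomial.legendre.leggauss(n_gauss)   # 1D points/weights on [-1, 1]
    dim = len(scalings)
    detJ = float(np.prod(scalings))                    # constant Jacobian determinant, cf. (19)
    total = 0.0
    for idx in product(range(n_gauss), repeat=dim):
        pt = np.array([xi[i] for i in idx])            # integration point in the mathematical domain
        weight = np.prod([w[i] for i in idx])
        total += weight * integrand(pt)
    return detJ * total

# Example: integrating 1 over a 2D element with scalings (0.05, 0.05) returns its area, 0.01.
area = element_integral(lambda p: 1.0, scalings=(0.05, 0.05))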

References