Physics-informed attention-based neural network for solving non-linear partial differential equations

by Ruben Rodriguez Torrado, et al.

Physics-Informed Neural Networks (PINNs) have enabled significant improvements in modelling physical processes described by partial differential equations (PDEs). PINNs are based on simple architectures, and learn the behavior of complex physical systems by optimizing the network parameters to minimize the residual of the underlying PDE. Current network architectures share some of the limitations of classical numerical discretization schemes when applied to non-linear differential equations in continuum mechanics. A paradigmatic example is the solution of hyperbolic conservation laws that develop highly localized nonlinear shock waves. Learning solutions of PDEs with dominant hyperbolic character is a challenge for current PINN approaches, which rely, like most grid-based numerical schemes, on adding artificial dissipation. Here, we address the fundamental question of which network architectures are best suited to learn the complex behavior of non-linear PDEs. We focus on network architecture rather than on residual regularization. Our new methodology, called Physics-Informed Attention-based Neural Networks (PIANNs), is a combination of recurrent neural networks and attention mechanisms. The attention mechanism adapts the behavior of the deep neural network to the non-linear features of the solution, and breaks the current limitations of PINNs. We find that PIANNs effectively capture the shock front in a hyperbolic model problem, and are capable of providing high-quality solutions inside and beyond the training set.




1 Introduction

Deep neural networks (DNNs) have achieved enormous success in recent years because they have significantly expanded the scope of possible tasks that they can perform, given sufficiently large datasets [1]. The range of applications is extraordinary, from natural language processing [2, 3], image analysis [4] and autonomous driving [5], to earthquake forecasting [6], playing videogames [7, 8] and, more recently, numerical differentiation [9].

Neural networks can approximate the solution of differential equations [10, 11], in particular high-dimensional partial differential equations (PDEs) [12, 13]. One of the most remarkable approaches to solving non-linear PDEs is physics-informed neural networks (PINNs) [14, 15]. PINNs are trained to solve supervised learning tasks constrained by PDEs, such as the conservation laws in continuum theories of fluid and solid mechanics [16, 17, 18, 11, 19]. The idea behind PINNs is to train the network using automatic differentiation (AD) by calculating and minimizing the residual, usually constrained by initial and boundary conditions, and possibly observed data [14]. PINNs have the potential to serve as on-demand, efficient simulators for physical processes described by differential equations (the forward problem) [14, 20, 18]. If trained accurately, PINNs can work faster and more accurately than numerical simulators of complex real-world phenomena. PINNs may also be used to assimilate data and observations into numerical models, or be used in parameter identification (the inverse problem) [14, 21] and uncertainty quantification [22, 23, 24, 25].

Learning solutions of nonlinear PDEs using current network architectures presents some of the same limitations of classical numerical discretization schemes. A paradigmatic example is the solution of hyperbolic PDEs. Hyperbolic conservation laws describe a plethora of physical systems in gas dynamics, acoustics, elastodynamics, optics, geophysics, and biomechanics [26]. Hyperbolic PDEs are challenging to solve numerically using classical discretization schemes, because they tend to form self-sharpening, highly-localized, nonlinear shock waves that require specific approximation strategies and fine meshes [27]. On the other hand, the ability of current PINNs to learn PDEs with a dominant hyperbolic character relies on adding artificial dissipation [28, 29, 30, 31], or on using a priori knowledge to increase the number of training points along the shock trajectories [22].

In this work, we propose a new perspective on solving nonlinear PDEs using deep learning, by focusing on network architectures rather than on residual regularization. We explore two core ideas: 1) A modified PINN architecture can provide a more general method for solving hyperbolic conservation law problems without a priori knowledge or residual regularization. 2) Relating network architecture with the physics encapsulated in a given PDE is possible and has a beneficial impact. Our hypothesis is that sophisticated, physics-specific network architectures (e.g. networks whose internal hierarchy is related to the physical processes being learned) may be more effectively trained and understood than standard feed-forward multilayer perceptrons.

The proposed new architecture, inspired by recent advances in deep learning for language processing and translation [32, 33], is a combination of gated recurrent units (GRUs) and attention mechanisms; we call this a physics-informed attention-based neural network (PIANN). The combination of both elements in the architecture allows for determination of the most relevant information (recurrent neural network with memory) to adapt the behavior of the deep neural network to approximate sharp shocks without the necessity of residual regularization or a priori knowledge (attention mechanism). We use a classical hyperbolic model problem (the Buckley-Leverett equation [34, 27]) as a benchmark for hyperbolic conservation laws to test our methodology, and find that PIANNs effectively capture the shock front and are capable of providing high-quality solutions inside and beyond the training set.

2 Problem formulation

The problem of interest is that of two immiscible fluids (oil and water) flowing through a porous medium (sand). The Buckley-Leverett (BL) equation [34] describes the evolution in time and space of the wetting-phase (water) saturation. Let $u(x,t)$ be the solution of the BL equation

$$\frac{\partial u}{\partial t} + \frac{\partial f(u; M)}{\partial x} = 0, \quad x \in (0,1), \; t \in (0,T], \tag{1}$$

$$u(x, 0) = 0, \quad x \in (0,1), \tag{2}$$

$$u(0, t) = 1, \quad t \in (0,T], \tag{3}$$

where $u$ usually represents the wetting-phase saturation, $f$ is the fractional flow function and $M$ is the mobility ratio of the two fluid phases.

This first-order hyperbolic equation is of interest as its solution can display both smooth solutions (rarefactions) and sharp fronts (shocks). Although the solution to this problem can be calculated analytically, the precise and stable resolution of these shocks poses well-known challenges for numerical methods [27].

Physics-Informed Neural Networks (PINNs) have been tested on this problem by Fuks and Tchelepi [28], who report good performance for concave fractional flow functions. The solution of the problem in the case of a non-concave fractional flow function is, however, much more challenging and remains an open problem. We take $f$ to be the non-concave flux function

$$f(u) = \frac{u^2}{u^2 + (1-u)^2 / M}, \tag{4}$$

for which we can obtain the analytical solution of the problem:

$$u(x,t) = \begin{cases} 1, & x = 0, \\ (f')^{-1}(x/t), & 0 < x \le x_s(t), \\ 0, & x > x_s(t), \end{cases} \tag{5}$$

where $x_s(t)$ represents the shock location defined by the Rankine-Hugoniot condition [27].
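For concreteness, the flux function above and the saturation at the shock front can be evaluated numerically. The sketch below is our own illustration, not the authors' code (the function names are ours); the closed form for the shock saturation is obtained by imposing the Welge tangent condition $f(u^*)/u^* = f'(u^*)$ on this particular flux, which the test against a finite-difference estimate of $f'$ confirms.

```python
import numpy as np

def flux(u, M):
    """Non-concave BL fractional flow function, f(u) = u^2 / (u^2 + (1 - u)^2 / M)."""
    return u**2 / (u**2 + (1.0 - u)**2 / M)

def shock_saturation(M):
    """Saturation u* at the shock front.

    Imposing the Welge tangent condition f(u*)/u* = f'(u*) on the flux
    above reduces to u*^2 (M + 1) = 1, i.e. u* = 1 / sqrt(M + 1).
    """
    return 1.0 / np.sqrt(M + 1.0)

def shock_location(M, t):
    """Shock position x_s(t) = f'(u*) t = (f(u*)/u*) t (Rankine-Hugoniot)."""
    us = shock_saturation(M)
    return flux(us, M) / us * t
```

For $M = 2$, for instance, $u^* = 1/\sqrt{3} \approx 0.577$, and the shock travels at the constant speed $f(u^*)/u^*$.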

3 Methodology

Let $\mathcal{D} = \{(x_i, t_j) : i = 0, \ldots, N, \; j = 0, \ldots, T\}$ be a discrete version of the domain of the BL equation. We define our PIANN as a vector function $\hat{u}_\theta$, where $\theta$ are the weights of the network to be estimated during training. The inputs for the proposed architecture are pairs of $(t, M)$ and the output is a vector where the $i$-th component is the solution evaluated at $x_i$. Notice the different treatment applied to spatial and temporal coordinates. Whereas $t$ is a variable of the vector function $\hat{u}_\theta$, the locations $x_i$ where we calculate the solution are fixed in advance. The output is a saturation map and therefore its values have to be in the interval $[0,1]$.

For the sake of readability, we introduce the architecture of $\hat{u}_\theta$ in section 4. However, we advance that, in order to enforce the boundary condition, we let our PIANN learn only the components $\hat{u}_1, \ldots, \hat{u}_N$ and then we concatenate the component $\hat{u}_0 = 1$. To enforce the initial condition, we set $\hat{u}_\theta(0, M) = \mathbf{0}$. To enforce that the solution be in the interval $[0,1]$, a sigmoid activation function is applied to each component of the last layer of our PIANN.
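The constraint handling just described can be sketched as a small output head: a sigmoid keeps each learned component in $(0,1)$, and the known boundary value is concatenated rather than learned. A minimal numpy illustration of the idea (not the authors' implementation):

```python
import numpy as np

def output_head(z):
    """Map raw last-layer outputs z (length N) to a saturation vector of length N + 1.

    The sigmoid confines every learned component to (0, 1); the boundary
    component u_0 = 1 is prepended, so the boundary condition holds exactly.
    """
    learned = 1.0 / (1.0 + np.exp(-z))        # sigmoid activation
    return np.concatenate(([1.0], learned))   # enforce u(x_0) = 1 by construction

u_hat = output_head(np.array([-2.0, 0.0, 3.0]))
# u_hat[0] is exactly 1, and every component lies in [0, 1]
```

Because the boundary value is hard-wired, no loss term is needed to push the network toward it.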

The parameters $\theta$ of the PIANN are estimated according to the physics-informed learning approach, which states that $\theta$ can be estimated from the BL equation eq. (1), the initial condition eq. (2) and the boundary condition eq. (3); in other words, no examples of the solution are needed to train a PINN.

After utilizing the information provided by the initial and boundary conditions, enforcing $\hat{u}_\theta(0, M) = \mathbf{0}$ and $\hat{u}_0 = 1$, respectively, we now define a loss function based on the information provided by eq. (1). To calculate the first term, the time derivative $\partial u / \partial t$, we propose two options. The first option is a central finite difference approximation, that is,

$$\left.\frac{\partial u}{\partial t}\right|_{i,j} \approx \frac{\hat{u}_i(t_{j+1}, M) - \hat{u}_i(t_{j-1}, M)}{t_{j+1} - t_{j-1}}. \tag{6}$$


Alternatively, we can calculate the derivative of our PIANN with respect to $t$, since we know the functional form of $\hat{u}_\theta$. It can be calculated using the automatic differentiation tools included in many machine learning libraries, such as PyTorch. Thus, we propose a second option to calculate this term as

$$\left.\frac{\partial u}{\partial t}\right|_{i,j} \approx \frac{\partial \hat{u}_i}{\partial t}(t_j, M). \tag{7}$$
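The second option can be illustrated with PyTorch's autograd directly. Below, a toy differentiable function stands in for the PIANN; the pattern, evaluating at the collocation times with `requires_grad=True` and then calling `torch.autograd.grad`, is the same.

```python
import torch

# Collocation times at which the time derivative is needed.
t = torch.tensor([0.25, 0.50, 0.75], requires_grad=True)

# Toy stand-in for the PIANN output at one spatial location; any
# differentiable map of t works the same way.
u_hat = t ** 2

# Because u_hat[j] depends only on t[j], the gradient of the sum gives
# du_hat/dt at every collocation time in a single call.
du_dt, = torch.autograd.grad(u_hat.sum(), t)
# du_dt equals 2 * t here
```

In the actual training loop, `u_hat` would be the network output, so the same call yields the exact derivative of the surrogate rather than a finite-difference approximation.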


The second term of eq. (1), the derivative of the flux with respect to the spatial coordinate, is approximated using central finite differences as

$$\left.\frac{\partial f}{\partial x}\right|_{i,j} \approx \frac{f_{i+1,j} - f_{i-1,j}}{x_{i+1} - x_{i-1}}, \tag{8}$$

where the vector of fluxes at the $i$-th location is calculated as

$$f_{i,j} = \frac{\hat{u}_i(t_j, M)^2}{\hat{u}_i(t_j, M)^2 + \left(1 - \hat{u}_i(t_j, M)\right)^2 / M}.$$

The spatial coordinate $x_i$ is included as a fixed parameter in our architecture.
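Putting the two discretized terms together, the interior residual of the BL equation can be assembled with plain array operations once the PIANN outputs are gathered into a grid. A numpy sketch under our own naming (`u` holds the network outputs $\hat{u}_i(t_j, M)$):

```python
import numpy as np

def bl_residual(u, x, t, M):
    """Central-difference residual of the BL equation at interior grid points.

    u : array of shape (len(x), len(t)), with u[i, j] ~ u(x_i, t_j).
    Returns an array of shape (len(x) - 2, len(t) - 2).
    """
    f = u**2 / (u**2 + (1.0 - u)**2 / M)                  # flux at every grid node
    du_dt = (u[:, 2:] - u[:, :-2]) / (t[2:] - t[:-2])     # central FD in time
    df_dx = (f[2:, :] - f[:-2, :]) / (x[2:, None] - x[:-2, None])  # central FD in space
    return du_dt[1:-1, :] + df_dx[:, 1:-1]                # residual on interior nodes

# The loss of eq. (9) is then the Frobenius norm of this residual matrix:
# loss = np.linalg.norm(bl_residual(u, x, t, M), ord="fro")
```

A quick sanity check: a constant saturation field satisfies the PDE trivially, so its residual vanishes identically.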

The loss function to estimate the parameters of the PINN is given as

$$\mathcal{L}(\theta) = \left\lVert \mathcal{R}(\theta) \right\rVert_F, \tag{9}$$

where $\mathcal{R}(\theta)$ is the matrix of residuals of eq. (1), with entries $\mathcal{R}_{i,j} = \partial u/\partial t|_{i,j} + \partial f/\partial x|_{i,j}$, and $\lVert \cdot \rVert_F$ is the Frobenius norm.

It should be noted that, unlike all previous physics-informed learning works in the literature (recall section 1), the initial and boundary conditions are not included in the loss function; they are already enforced in the architecture. This has three direct consequences. First, we are enforcing a stronger constraint that does not allow any error on the initial and boundary conditions. Second, the PIANN does not need to learn these conditions by itself, and it can concentrate only on learning the parameters that minimize the residuals of the BL equation. Third, since the loss contains only the residual term, there are no weights to be tuned to control the effect of the initial and boundary conditions on the final solution.

Finally, the parameters of the PIANN are estimated using the ADAM optimizer [35] to minimize eq. (9) with respect to $\theta$.

4 PIANN architecture

Although it has been demonstrated that neural networks are universal function approximators, certain challenging problems (e.g. solving non-linear PDEs) may require more specific architectures to capture all their properties. For that reason, we have proposed a new architecture, inspired by [32], to solve non-linear PDEs with discontinuities under two assumptions.

First, to automatically detect discontinuities we need an architecture that can exploit the correlations between the values of the solution at all spatial locations $x_i$. Second, the architecture has to be flexible enough to capture different behaviors of the solution in different regions of the domain. To this end, we propose the use of encoder-decoder GRUs [2] for predicting the solution at all locations at once, combined with a recent machine learning tool known as attention mechanisms [33].

Our approach presents several advantages compared to traditional simulators: i) Instead of using just neighboring cells' information to calculate $\hat{u}_i$, as in numerical methods, our architecture uses the complete encoded sequence of the input grid to obtain $\hat{u}_i$, allowing us to capture non-local relationships that numerical methods struggle to identify. ii) The computational time for the forward pass of neural network models is linear with respect to the number of cells in our grid. In other words, our method is a faster alternative to traditional methods of solving PDEs.

Figure 1 shows an outline of the proposed architecture. We start by feeding the input pair $(t, M)$ to a single fully connected layer. Thus, we obtain $h_0$, the initial hidden state of a sequence of GRU blocks (yellow). Each of them corresponds to a spatial coordinate $x_i$, which is combined with the previous hidden state $h_{i-1}$ inside the block.

This generates a set of vectors $y_1, \ldots, y_N$ which can be understood as a representation of the input in a latent space. The definitive solution $\hat{u}_i$ (we omit the subindex $\theta$ for simplicity) is reached after a new sequence of GRU blocks (blue) whose initial hidden state $s_0$ is initialized as $h_N$ to preserve the memory of the system.

In addition to the hidden state $s_{i-1}$, the $i$-th block is fed with a concatenation of the solution at the previous location $\hat{u}_{i-1}$ and a context vector $c_i$, that is,

$$s_i = \mathrm{GRU}\left(s_{i-1}, \left[\hat{u}_{i-1}; c_i\right]\right). \tag{10}$$
How the context vector is obtained is one of the key aspects of our architecture, since it provides the PINN with enough flexibility to fit the different behaviors of the solution depending on the region of the domain. Inspired by [32], we introduce an attention mechanism between both GRU block sequences. Our attention mechanism is a single fully connected layer, $a$, that learns the relationship between each component of $y_j$ and the hidden states of the (blue) GRU sequence,

$$e_{i,j} = a\left(s_{i-1}, y_j\right). \tag{11}$$
Then, the rows of the matrix $E = (e_{i,j})$ are normalized using a softmax function as

$$\alpha_{i,j} = \frac{\exp(e_{i,j})}{\sum_{k=1}^{N} \exp(e_{i,k})}, \tag{12}$$
and the context vectors are calculated as

$$c_i = \sum_{j=1}^{N} \alpha_{i,j} \, y_j. \tag{13}$$
The coefficients $\alpha_{i,j}$ can be understood as the degree of influence of the component $y_j$ on the output $\hat{u}_i$. This is one of the main innovations of our work to solve hyperbolic equations with discontinuities. The attention mechanism automatically determines the most relevant encoded information of the full sequence of the input data to predict $\hat{u}_i$. In other words, the attention mechanism is a new method that determines the location of the shock automatically and provides more accurate behavior of the PIANN model around this location. This new methodology breaks the limitations explored by other authors [36, 28, 29], since it is able to capture the discontinuity without specific prior information or a regularization term on the residual.
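In array terms, the attention step amounts to scoring each encoder vector $y_j$ against the current decoder state, softmax-normalizing the scores, and taking the weighted average. The sketch below uses an additive (Bahdanau-style) score as an illustrative stand-in for the learned layer $a$; the weight shapes and random values are our assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 6, 4                        # number of spatial locations, latent dimension

Y = rng.normal(size=(N, d))        # encoder outputs y_1, ..., y_N
s_prev = rng.normal(size=d)        # current decoder hidden state s_{i-1}

# Illustrative learned parameters of the scoring layer a.
W = rng.normal(size=(d, 2 * d))
v = rng.normal(size=d)

# One score per encoder position: e_j = v^T tanh(W [s_{i-1}; y_j]).
e = np.array([v @ np.tanh(W @ np.concatenate((s_prev, y))) for y in Y])

# Softmax over the sequence: non-negative weights that sum to 1.
alpha = np.exp(e - e.max())
alpha /= alpha.sum()

# Context vector: attention-weighted average of the encoder outputs.
c = alpha @ Y
```

When the solution is smooth everywhere, the weights tend to spread out; near a shock, they concentrate on the encoder positions around the front, which is exactly the behavior visualized in the attention maps discussed in section 5.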

This is the first paper to use attention mechanisms to solve non-linear PDEs for hyperbolic problems with discontinuity.

Figure 1: Architecture of the physics-informed attention-based neural network (PIANN) for the prediction of the variable $u$.

5 Results

In this section we perform a set of experiments that support the proposed methodology. The goal of our experiments is to demonstrate that our PIANN is indeed able to approximate the analytical solution given in eq. (5).

The training set is given by a discretization of the $(x, t)$ domain and a set of values of the mobility ratio $M$, for a total of 257,550 points. We want to emphasize that no examples of the solution are known at these points, and therefore no expensive and slow simulators are required to build the training set. To estimate the parameters of the PIANN, we minimize eq. (9) by running the ADAM optimizer with a fixed learning rate.

Figure 2: Residual value at each epoch, on a semilog scale.

Figure 2 shows the residual value on the testing dataset at each epoch. We can observe a fast convergence of the method, with the cumulative value of the residual becoming very small after only a few epochs. This demonstrates that we are minimizing the residuals in eq. (1) and subsequently solving the equation that governs BL.

Figure 3 shows the comparison between the analytical solution (red) and the solution obtained by our PIANN (blue) for different values of $M$ used during training. The top, middle and bottom rows correspond to three increasing values of $M$, and the columns, from left to right, to four successive time steps. We can distinguish three regions of interest in the solution. Following increasing $x$ coordinates, the first region on the left is one where the water saturation varies smoothly, following the rarefaction part of the solution. The second region is a sharp saturation change that corresponds to the shock in the solution, and the third region is ahead of the shock, with undisturbed water saturation values that are still at zero. For all cases, we observe that the PIANN properly learns the correct rarefaction behavior of the first region and approximates the analytical solution extremely well. In the third region, the PIANN also fits the analytical solution perfectly and displays an undisturbed water saturation at zero. As for classical numerical methods, the shock region is the most challenging to resolve.

Figure 3: Attention weight maps (top rows) and comparison between the saturation predicted by the neural network and the exact solution of the PDE (bottom rows), for different values of $M$. The columns, from left to right, correspond to successive time steps.

Around the shock, the PIANN seems unable to perfectly resolve a sharp front; the represented behavior is a shock that is smoothed over and displays non-monotonic artifacts upstream and downstream of the front. The location of the shock is, however, well captured. Such behavior is reminiscent of that observed in higher-order finite difference methods, where slope limiters are often used to correct the non-physical oscillations.

Figure 4: Comparison between solutions obtained with residuals evaluated using central and upwind finite differences.

Importantly, this means the PIANN needs to learn where the shock is located in order to fit differently on both sides of it. This is the role of the attention mechanism of our architecture. On top of each figure we have visualized the attention map introduced by [32] for every timestep. These maps visualize the attention weights $\alpha_{i,j}$ used to predict the variable $\hat{u}_i$. We observe that in all cases the attention mechanism identifies the discontinuity (the water front) and subsequently modifies the behavior of the network in the three different regions described above. This shows that the attention mechanism provides all the necessary information to capture discontinuities automatically, without the necessity of training data or a priori knowledge. Finally, it is important to note that the attention weights become constant when the shock/discontinuity disappears.

In addition, we test the behavior of our methodology on values of $M$ within the training range but not used during training. In other words, we want to check the capability of our PIANN model to interpolate solutions. Figure 5 shows that our PIANN provides solutions that correctly detect the shock.

Figure 5: Comparison between the saturation predicted by the neural network and the exact solution of the PDE for two values of $M$ inside the training range but not used during training (top and bottom rows). The columns, from left to right, correspond to successive time steps.

We also test the ability of the PIANN to extrapolate solutions to values of $M$ outside the range of the training set. Figure 6 shows a degradation of the results when $M$ is far from the original training range. We observe that our method predicts the behavior of the shock for values of $M$ moderately beyond the range; however, the shock is totally missed for values much further out, and in such cases retraining the model with higher values of $M$ is recommended. It is important to note that the results show that our method is stable and converges in all cases.

Figure 6: Comparison between the saturation predicted by the neural network and the exact solution of the PDE for values of $M$ outside the training range (top and bottom rows). The columns, from left to right, correspond to successive time steps.

We test how the neural residual error progresses for different spatial and temporal resolutions. The results are shown in table 1, and demonstrate that our PIANN obtains smaller residuals when the resolution of the training set increases. However, we observe that the changes in the residual are not highly significant. This is an advantage with respect to traditional numerical methods, such as those used in CFD, where finer grids are necessary to capture the shock and guarantee convergence and stability.

Finally, we have compared the results obtained with central and upwind finite difference schemes for the spatial derivative of the flux. The first-order upwind difference introduces a dissipation error when applied to the residual of the Buckley-Leverett equation, which is equivalent to regularizing the problem via artificial diffusion. Figure 4 shows that both approaches produce similar results with respect to the analytical solution. The fact that both central and upwind differences yield similar predictions is important, because it suggests that the proposed PIANN approach does not rely on artificial dissipation for shock capturing.
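The dissipative character of the first-order upwind stencil, versus the oscillatory character of the central one, can be seen on a single explicit step of the linear advection equation $u_t + a u_x = 0$ applied to a step profile. This toy comparison is our own illustration, not part of the paper's experiments:

```python
import numpy as np

a, dx, dt = 1.0, 0.1, 0.05                        # wave speed and grid spacings
u = np.where(np.arange(20) < 10, 1.0, 0.0)        # discontinuous initial data

# Central differences: second order, but with no built-in dissipation, so a
# single step already produces a non-monotonic overshoot near the front.
central = u.copy()
central[1:-1] -= a * dt / (2.0 * dx) * (u[2:] - u[:-2])

# First-order upwind: biased toward the inflow side; its truncation error
# acts like an added diffusion term ~ (a dx / 2) u_xx, which smears the
# front but keeps the solution monotone.
upwind = u.copy()
upwind[1:] -= a * dt / dx * (u[1:] - u[:-1])
```

After one step, the central result overshoots above 1 next to the discontinuity, while the upwind result stays within $[0, 1]$ at the cost of a smeared profile; the observation above is that the PIANN's predictions change little between the two stencils.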

6 Discussion

In this work, we have introduced a new method to solve hyperbolic PDEs. We propose a new perspective by focusing on network architectures rather than on residual regularization. We call our new architecture a physics-informed attention-based neural network (PIANN).

PIANN's novel architecture is based on two assumptions. First, correlations between values of the solution at all the spatial locations must be exploited, and second, the architecture has to be flexible enough to identify the shock and capture different behaviors of the solution in different regions of the domain. We have proposed an encoder-decoder GRU-based network, combined with an attention mechanism, to exploit the most relevant parts of the fully encoded input. The attention mechanism is responsible for identifying the shock location and adapting the behavior of the PIANN model.

Unlike previous methods in the literature, the loss function of PIANNs is based solely on the residuals of the PDE, and the initial and boundary conditions are introduced in the architecture. These are stronger constraints than the ones enforced by previous methods, since we do not allow room for learning error on the initial or boundary conditions. As a result, PIANN’s training aims only at minimizing the residual of the PDE; no hyperparameters are needed to control the effect of initial and boundary conditions on the solution.

We have applied the proposed methodology to the non-concave flux Buckley-Leverett problem, which has hitherto been an open problem for PINNs. The experimental results support the validity of the proposed methodology and show that: i) during training, the residuals of the equation decrease quickly to very small values, which means that our methodology is indeed solving the differential equation; ii) the attention mechanism automatically detects shock waves of the solution and allows the PIANN to fit the different behaviors of the analytical solution; iii) the PIANN is able to interpolate solutions for values of the mobility ratio $M$ inside the range of the training set, as well as to extrapolate when the value of $M$ is outside the range, although we observe that if $M$ is too far away from the range of the training set the quality of the solution decreases, in which case a retraining of the network is recommended; and iv) the residuals decrease when the resolution of the training set increases, although the change in the residuals is not highly significant. This is advantageous with respect to traditional numerical methods, where fine grids are needed to capture the shock and guarantee convergence and stability.

In conclusion, the proposed methodology is not confined by the current limitations of deep learning for solving hyperbolic PDEs with shock waves, and opens the door to applying these techniques to real-world problems, such as challenging reservoir simulations or carbon sequestration. It is plausible that this method could be applied to model many processes in other domains which are described by non-linear PDEs with shock waves.

Table 1: Residual error for different spatial and temporal resolutions of the training set (columns: resolution, residual error).