Deep neural networks (DNNs) have achieved enormous success in recent years, significantly expanding the range of tasks they can perform given sufficiently large datasets . The range of applications is extraordinary, from natural language processing [2, 3], image analysis and autonomous driving , to earthquake forecasting , playing video games [7, 8] and, more recently, numerical differentiation .
Neural networks can approximate the solution of differential equations [10, 11], in particular high-dimensional partial differential equations (PDEs) [12, 13]. One of the most remarkable approaches to solving non-linear PDEs is physics-informed neural networks (PINNs) [14, 15]. PINNs are trained to solve supervised learning tasks constrained by PDEs, such as the conservation laws in continuum theories of fluid and solid mechanics [16, 17, 18, 11, 19]. The idea behind PINNs is to train the network using automatic differentiation (AD) by calculating and minimizing the residual, usually constrained by initial and boundary conditions, and possibly observed data . PINNs have the potential to serve as on-demand, efficient simulators for physical processes described by differential equations (the forward problem) [14, 20, 18]. If trained accurately, PINNs can work faster and more accurately than numerical simulators of complex real-world phenomena. PINNs may also be used to assimilate data and observations into numerical models, or for parameter identification (the inverse problem) [14, 21] and uncertainty quantification [22, 23, 24, 25].
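As a concrete illustration of the residual-minimization idea (not the architecture proposed later in this work), the sketch below evaluates a discrete PDE residual for the linear advection equation u_t + u_x = 0, whose exact solutions are traveling waves u(x, t) = g(x - t). A PINN would compute these derivatives by automatic differentiation and drive the residual to zero; here we use finite differences on a grid, and the equation, grid, and solution are illustrative assumptions:

```python
import numpy as np

def advection_residual(u, dx, dt):
    """Central-difference residual u_t + u_x on the interior of a (t, x) grid."""
    u_t = (u[2:, 1:-1] - u[:-2, 1:-1]) / (2 * dt)
    u_x = (u[1:-1, 2:] - u[1:-1, :-2]) / (2 * dx)
    return u_t + u_x

dx = dt = 0.01
x = np.arange(0.0, 1.0, dx)
t = np.arange(0.0, 1.0, dt)
T, X = np.meshgrid(t, x, indexing="ij")

# Exact traveling-wave solution with g(s) = sin(2*pi*s); its residual is
# (numerically) zero, which is exactly the property training enforces.
u_exact = np.sin(2 * np.pi * (X - T))
res = advection_residual(u_exact, dx, dt)
```

For the exact solution the residual vanishes on the grid; a trainable network would instead start from a large residual and minimize it.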
Learning solutions of nonlinear PDEs using current network architectures presents some of the same limitations as classical numerical discretization schemes. A paradigmatic example is the solution of hyperbolic PDEs. Hyperbolic conservation laws describe a plethora of physical systems in gas dynamics, acoustics, elastodynamics, optics, geophysics, and biomechanics . Hyperbolic PDEs are challenging to solve numerically using classical discretization schemes, because their solutions tend to form self-sharpening, highly localized, nonlinear shock waves that require specific approximation strategies and fine meshes . On the other hand, the ability of current PINNs to learn PDEs with a dominant hyperbolic character relies on adding artificial dissipation [28, 29, 30, 31], or on using a priori knowledge to increase the number of training points along the shock trajectories .
In this work, we propose a new perspective on solving nonlinear PDEs using deep learning, by focusing on network architectures rather than on residual regularization. We explore two core ideas: 1) A modified PINN architecture can provide a more general method for solving hyperbolic conservation law problems without a priori knowledge or residual regularization. 2) Relating network architecture with the physics encapsulated in a given PDE is possible and has a beneficial impact. Our hypothesis is that sophisticated, physics-specific network architectures (e.g. networks whose internal hierarchy is related to the physical processes being learned) may be more effectively trained and understood than standard feed-forward multilayer perceptrons.
The proposed new architecture, inspired by recent advances in deep learning for language processing and translation [32, 33], is a combination of gated recurrent units (GRUs) and attention mechanisms; we call this a physics-informed attention-based neural network (PIANN). The combination of both elements allows the network to determine the most relevant information (via the recurrent units' memory) and to adapt its behavior to approximate sharp shocks without the need for residual regularization or a priori knowledge (via the attention mechanism). We use a classical hyperbolic model problem (the Buckley-Leverett equation [34, 27]) as a benchmark for hyperbolic conservation laws to test our methodology, and find that PIANNs effectively capture the shock front and are capable of providing high quality solutions inside and beyond the training set.
2 Problem formulation
The problem of interest is that of two immiscible fluids (oil and water) flowing through a porous medium (sand). The Buckley-Leverett (BL) equation  describes the evolution in time and space of the wetting-phase (water) saturation. Let be the solution of the BL equation
where usually represents the wetting-phase saturation, is the fractional flow function and is the mobility ratio of the two fluid phases.
This first-order hyperbolic equation is of interest as its solution can display both smooth solutions (rarefactions) and sharp fronts (shocks). Although the solution to this problem can be calculated analytically, the precise and stable resolution of these shocks poses well-known challenges for numerical methods .
Physics-Informed Neural Networks (PINNs) have been tested on this problem by Fuks and Tchelepi , who report good performance for concave fractional flow functions. The solution of the problem in the case of a non-concave fractional flow function is, however, much more challenging and remains an open problem. We take to be the non-concave flux function
for which we can obtain the analytical solution of the problem:
where represents the shock location defined by the Rankine-Hugoniot condition .
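The flux expression above did not survive extraction. A standard non-concave choice for the Buckley-Leverett fractional flow with mobility ratio M is f(u) = u^2 / (u^2 + (1 - u)^2 / M); we assume that form here purely for illustration. Its S-shape (convex near u = 0, concave near u = 1) is what produces the rarefaction-plus-shock structure of the analytical solution:

```python
import numpy as np

# Assumed standard non-concave Buckley-Leverett flux; M is the mobility ratio.
def flux(u, M=2.0):
    return u**2 / (u**2 + (1.0 - u) ** 2 / M)

u = np.linspace(0.0, 1.0, 1001)
f = flux(u)

# The second derivative changes sign on (0, 1): convex near u=0 (f'' > 0),
# concave near u=1 (f'' < 0), so the flux has an inflection point.
f2 = np.gradient(np.gradient(f, u), u)
```

The inflection point is the reason a single rarefaction cannot connect the inlet and initial states, forcing a shock that satisfies the Rankine-Hugoniot condition.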
Let be a discrete version of the domain of
. We define our PIANN as a vector function, where
are the weights of the network to be estimated during training. The inputs for the proposed architecture are pairs of and the output is a vector whose -th component is the solution evaluated at . Notice the different treatment applied to the spatial and temporal coordinates. Whereas is a variable of the vector function , the locations where we calculate the solution are fixed in advance. The output is a saturation map and therefore its values have to lie in the interval .
For the sake of readability, we introduce the architecture of in section 4. However, we note in advance that, in order to enforce the boundary condition, we let our PIANN learn only the components and then concatenate the component . To enforce the initial conditions, we set . To enforce that the solution lies in the interval
, a sigmoid activation function is applied to each component of the last layer of our PIANN.
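A minimal sketch of this hard-constraint output construction follows. The inlet boundary value (taken as 1.0 here) and the raw last-layer activations are hypothetical stand-ins for the elided symbols above:

```python
import numpy as np

def constrained_output(raw_outputs, u_inlet=1.0):
    """Sigmoid keeps every learned component in (0, 1); the boundary component
    is concatenated rather than learned, so the boundary condition holds
    exactly by construction instead of being penalized in the loss."""
    interior = 1.0 / (1.0 + np.exp(-raw_outputs))  # sigmoid -> (0, 1)
    return np.concatenate(([u_inlet], interior))

raw = np.array([3.2, -0.5, -4.0, -9.0])  # hypothetical last-layer activations
u = constrained_output(raw)
```

Because the boundary value is inserted directly, no training signal is spent on satisfying it, which is the point of the architectural enforcement described above.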
The parameters of the PIANN are estimated according to the physics-informed learning approach, which states that can be estimated from the BL equation eq. (1), the initial conditions eq. (2) and boundary conditions eq. (3), or in other words, no examples of the solution are needed to train a PINN.
After utilizing the information provided by the initial and boundary conditions enforcing and
, respectively, we now define a loss function based on the information provided by eq. (1). To calculate the first term we propose two options. The first option is a central finite difference approximation, that is,
Alternatively, we can calculate the derivative of our PIANN with respect to since we know the functional form of.
The second term of eq. (1), the derivative of the flux with respect to the spatial coordinate, is approximated using central finite difference as
where the vector of fluxes at the -th location is calculated as
The spatial coordinate is included as a fixed parameter in our architecture.
The loss function to estimate the parameters of the PINN is given as
where denotes the Frobenius norm.
It should be noted that, unlike all previous physics-informed learning works in the literature (recall section 1), the initial and boundary conditions are not included in the loss function; they are already enforced in the architecture. This has three direct consequences. First, we are enforcing a stronger constraint that does not allow any error on the initial and boundary conditions. Second, the PIANN does not need to learn these conditions by itself, and can concentrate solely on learning the parameters that minimize the residuals of the BL equation. Third, since the loss contains only the residual term, there are no weights to be tuned to control the effect of the initial and boundary conditions on the final solution.
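Putting the discrete residual and the loss together, a minimal NumPy sketch is given below. It assumes central differences in both coordinates (the text also allows autodiff in time) and the standard non-concave BL flux f(u) = u^2/(u^2 + (1-u)^2/M); both are assumptions filling symbols elided above:

```python
import numpy as np

def flux(u, M=2.0):
    # Assumed standard Buckley-Leverett fractional flow with mobility ratio M.
    return u**2 / (u**2 + (1.0 - u) ** 2 / M)

def residual_loss(U, dt, dx, M=2.0):
    """U[n, i] ~ solution at time t_n, location x_i. Returns the squared
    Frobenius norm of the interior residual u_t + f(u)_x."""
    u_t = (U[2:, 1:-1] - U[:-2, 1:-1]) / (2 * dt)   # central difference in t
    f = flux(U, M)
    f_x = (f[1:-1, 2:] - f[1:-1, :-2]) / (2 * dx)   # central difference in x
    R = u_t + f_x                                    # residual matrix
    return np.sum(R**2)  # squared Frobenius norm

U = np.random.default_rng(0).uniform(0.0, 1.0, size=(6, 8))
loss = residual_loss(U, dt=0.01, dx=0.01)
```

A spatially and temporally constant field has zero residual, as it should: both derivative terms vanish.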
4 PIANN architecture
Although it has been demonstrated that neural networks are universal function approximators, certain challenging problems (e.g. solving non-linear PDEs) may require more specific architectures to capture all their properties. For that reason, we have proposed a new architecture, inspired by , to solve non-linear PDEs with discontinuities under two assumptions.
First, to automatically detect discontinuities we need an architecture that can exploit the correlations between the values of the solution at all spatial locations . Second, the architecture has to be flexible enough to capture different behaviors of the solution in different regions of the domain. To this end, we propose the use of encoder-decoder GRUs  for predicting the solution at all locations at once, combined with a recent machine learning tool known as attention mechanisms .
Our approach presents several advantages compared to traditional simulators: i) Instead of using only neighboring cells' information to calculate as in numerical methods, our architecture uses the complete encoded sequence input of the grid to obtain , allowing us to capture non-local relationships that numerical methods struggle to identify. ii) The computation time of the forward pass of a neural network is linear in the number of cells in the grid. In other words, our method is a faster alternative to traditional methods of solving PDEs.
Figure 1 shows an outline of the proposed architecture. We start by feeding the input pair to a single fully connected layer. Thus, we obtain the initial hidden state of a sequence of GRU blocks (yellow). Each of them corresponds to a spatial coordinate , which is combined with the previous hidden state inside the block.
This generates a set of vectors that can be understood as a representation of the input in a latent space. The final solution (we omit the subindex for simplicity) is reached after a new sequence of GRU blocks (blue) whose initial hidden state is initialized as to preserve the memory of the system.
In addition to the hidden state , the -th block is fed with a concatenation of the solution at the previous location and a context vector, that is
How the context vector is obtained is one of the key aspects of our architecture, since it will provide the PINN with enough flexibility to fit to the different behaviors of the solution depending on the region of the domain. Inspired by , we introduce an attention mechanism between both GRU block sequences. Our attention mechanism is a single fully connected layer, , that learns the relationship between each component of and the hidden states of the (blue) GRU sequence,
Then, the rows of matrix are normalized using a softmax function as
and the context vectors are calculated as
The coefficients can be understood as the degree of influence of the component in the output . This is one of the main innovations of our work for solving hyperbolic equations with discontinuities. The attention mechanism automatically determines the most relevant encoded information of the full input sequence to predict the . In other words, the attention mechanism automatically determines the location of the shock and provides more accurate behavior of the PIANN model around it. This methodology overcomes the limitations encountered by other authors   , since it is able to capture the discontinuity without specific prior information or a residual regularization term.
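The attention steps described above (score matrix, row-wise softmax, context vectors as weighted sums of latent states) can be sketched as follows. The bilinear score, the shapes, and the random stand-ins for learned weights are illustrative assumptions, not the exact layer used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, H = 5, 4                      # N spatial locations, hidden size H (assumed)
enc = rng.normal(size=(N, H))    # latent states produced by the first GRU pass
W = rng.normal(size=(H, H))      # stand-in for the learned attention layer

# E[i, j]: score of latent state j for predicting output component i.
E = enc @ W @ enc.T

# Row-wise softmax: each row of A is a probability distribution over states.
A = np.exp(E) / np.exp(E).sum(axis=1, keepdims=True)

# context[i] = sum_j A[i, j] * enc[j]: the weighted summary fed to block i.
context = A @ enc
```

Near a shock the softmax rows concentrate on the states around the front, which is how the mechanism localizes the discontinuity; in smooth regions the weights are spread out.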
To the best of our knowledge, this is the first paper to use attention mechanisms to solve non-linear hyperbolic PDEs with discontinuities.
In this section we perform a set of experiments that support the proposed methodology. The goal of our experiments is to demonstrate that our PIANN is indeed able to approximate the analytical solution given in eq. (5).
The training set is given by a grid , and a set of values of , which produces , , and a total of 257,550 points. We want to emphasize that no examples of the solution are known at these points, and therefore no expensive and slow simulators are required to build the training set. To estimate the parameters of the PIANN, we minimize eq. (9) by running the Adam optimizer for epochs with a learning rate of .
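As a toy illustration of residual-only training (no solution examples anywhere in the loss), the sketch below fits a single parameter, the wave speed c in the ansatz u = sin(x - c t), by minimizing the finite-difference residual of u_t + u_x = 0. Plain gradient descent with a numerical gradient stands in for Adam with automatic differentiation; the ansatz, grid, and step size are all assumptions for illustration:

```python
import numpy as np

dx = dt = 0.05
x = np.arange(0.0, 2 * np.pi, dx)
t = np.arange(0.0, 1.0, dt)
T, X = np.meshgrid(t, x, indexing="ij")

def loss(c):
    """Mean squared finite-difference residual of u_t + u_x = 0
    for the ansatz u(x, t) = sin(x - c*t)."""
    u = np.sin(X - c * T)
    u_t = (u[2:, 1:-1] - u[:-2, 1:-1]) / (2 * dt)
    u_x = (u[1:-1, 2:] - u[1:-1, :-2]) / (2 * dx)
    return np.mean((u_t + u_x) ** 2)

c, lr, eps = 0.0, 0.5, 1e-6
for _ in range(200):
    g = (loss(c + eps) - loss(c - eps)) / (2 * eps)  # numerical gradient
    c -= lr * g
# c converges toward the true wave speed 1 using only the residual.
```

The parameter is recovered without ever supplying a labeled solution value, which is the same training principle used for the PIANN, just at toy scale.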
Figure 2 shows the residual value for the testing dataset at each epoch for . We observe fast convergence of the method, with a cumulative residual smaller than after a few epochs. This demonstrates that we are minimizing the residuals in eq. (1) and thereby solving the equation that governs BL.
Figure 3 shows the comparison between the analytical solution (red) and the solution obtained by our PIANN (blue) for different used during training. Top, middle and bottom rows correspond to , and , respectively, and the columns, from left to right, correspond to different time steps , , and , respectively. We can distinguish three regions of interest in the solution. Following increasing x coordinates, the first region on the left is one where the water saturation varies smoothly, following the rarefaction part of the solution. The second region is a sharp saturation change that corresponds to the shock in the solution, and the third region is ahead of the shock, with undisturbed water saturation values that are still at zero. For all cases, we observe that the PIANN properly learns the correct rarefaction behavior of the first region and approximates the analytical solution extremely well. In the third region, the PIANN also fits the analytical solution perfectly and displays an undisturbed water saturation at zero. As with any classical numerical method, the shock region is the most challenging to resolve.
Around the shock, the PIANN seems unable to perfectly resolve a sharp front: the predicted behavior is a smoothed-over shock that displays non-monotonic artifacts upstream and downstream of the front. The location of the shock is, however, well captured. This behavior is reminiscent of that observed in higher-order finite difference methods, where slope limiters are often used to correct for non-physical oscillations.
Importantly, this means the PIANN needs to learn where the shock is located in order to fit differently on each side of it. This is the role of the attention mechanism of our architecture. On top of each figure we visualize the attention map introduced by  for every timestep. These maps visualize the attention weights used to predict the variable . We observe that in all cases the attention mechanism identifies the discontinuity (the water front) and subsequently modifies the behavior of the network in the three different regions described above. This shows that the attention mechanism provides all the information necessary to capture discontinuities automatically, without the need for training data or a priori knowledge. Finally, it is important to note that the attention weights become constant when the shock/discontinuity disappears.
In addition, we test the ability of our methodology to provide solutions for BL at points within the training range of : and
. In other words, we check the capability of our PIANN model to interpolate solutions. Figure 5 shows that our PIANNs provide solutions that correctly detect the shock.
We also test the PIANN's ability to extrapolate solutions outside the range of the training set: , and . Figure 6 shows a degradation of the results when is far from the original training set. We observe that our method predicts the behavior of the shock for . However, the shock is missed entirely for , and as such, retraining the model with higher values of is recommended. It is important to note that the results show that our method remains stable and converges in all cases.
We test how the residual error progresses for different and resolutions. Results are shown in table 1, and demonstrate that our PIANN obtains smaller residuals when the resolution of the training set increases. However, the changes in the residual are not highly significant. This is an advantage with respect to traditional numerical methods, such as those used in CFD, where smaller values of are necessary to capture the shock and to guarantee convergence and stability.
Finally, we have compared the results obtained with central and upwind finite difference schemes for the term of the vector of fluxes. The first-order upwind difference introduces a dissipation error when applied to the residual of the Buckley-Leverett equation, which is equivalent to regularizing the problem via artificial diffusion. Figure 4 shows that both approaches yield similar results with respect to the analytical solution. The fact that both central and upwind differences yield similar predictions is important, because it suggests that the proposed PIANN approach does not rely on artificial dissipation for shock capturing.
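The difference between the two stencils can be seen on an idealized jump. The grid, the assumed BL flux f(u) = u^2/(u^2 + (1-u)^2/M), and the step profile below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def flux(u, M=2.0):
    # Assumed standard Buckley-Leverett fractional flow with mobility ratio M.
    return u**2 / (u**2 + (1.0 - u) ** 2 / M)

dx = 0.01
x = np.arange(0.0, 1.0, dx)
u = np.where(x < 0.5, 1.0, 0.0)  # idealized shock profile
f = flux(u)

# Both stencils evaluated at the interior nodes x[1:-1]:
f_x_central = (f[2:] - f[:-2]) / (2 * dx)  # second-order, non-dissipative
f_x_upwind = (f[1:-1] - f[:-2]) / dx       # first-order backward (upwind for
                                           # flow in +x); adds numerical diffusion
```

Both stencils vanish away from the jump; at the jump the upwind stencil concentrates the derivative in a single cell (first-order, dissipative), while the central stencil spreads it over two.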
In this work, we have introduced a new method to solve hyperbolic PDEs. We propose a new perspective by focusing on network architectures rather than on residual regularization. We call our new architecture a physics informed attention neural network (PIANN).
PIANN’s novel architecture is based on two assumptions. First, correlations between values of the solution at all the spatial locations must be exploited, and second, the architecture has to be flexible enough to identify the shock and capture different behaviors of the solution at different regions of the domain. We have proposed an encoder-decoder GRU-based network to use the most relevant information of the fully encoded information, combined with the use of an attention mechanism. The attention mechanism is responsible for identifying the shock location and adapting the behavior of the PIANN model.
Unlike previous methods in the literature, the loss function of PIANNs is based solely on the residuals of the PDE, and the initial and boundary conditions are introduced in the architecture. These are stronger constraints than the ones enforced by previous methods, since we do not allow room for learning error on the initial or boundary conditions. As a result, PIANN’s training aims only at minimizing the residual of the PDE; no hyperparameters are needed to control the effect of initial and boundary conditions on the solution.
We have applied the proposed methodology to the non-concave flux Buckley-Leverett problem, which has hitherto been an open problem for PINNs. The experimental results support the validity of the proposed methodology and show that: i) during training, the residuals of the equation decrease quickly to values smaller than , which means that our methodology is indeed solving the differential equation; ii) the attention mechanism automatically detects shock waves of the solution and allows the PIANN to fit the different behaviors of the analytical solution; iii) the PIANN is able to interpolate solutions for values of the mobility ratio inside the range of the training set, as well as to extrapolate when the value of is outside that range, although if is too far from the range of the training set the quality of the solution decreases and retraining the network is recommended; and iv) the residuals decrease when the resolution of the training set increases, although the change in the residuals is not highly significant. This is advantageous with respect to traditional numerical methods, where small values of are needed to capture the shock and guarantee convergence and stability.
In conclusion, the proposed methodology is not confined by the current limitations of deep learning for solving hyperbolic PDEs with shock waves, and opens the door to applying these techniques to real-world problems, such as challenging reservoir simulations or carbon sequestration. It is plausible that this method could be applied to model many processes in other domains which are described by non-linear PDEs with shock waves.
-  Terrence J. Sejnowski. The unreasonable effectiveness of deep learning in artificial intelligence. PNAS, 117:30033–30038, 2020.
-  Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
-  Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. arXiv preprint arXiv:1409.3215, 2014.
-  D Jude Hemanth and V Vieira Estrela. Deep learning for image processing applications, volume 31. IOS Press, 2017.
-  Sorin Grigorescu, Bogdan Trasnea, Tiberiu Cocias, and Gigel Macesanu. A survey of deep learning techniques for autonomous driving. Journal of Field Robotics, 37(3):362–386, 2020.
-  Paul A Johnson, Bertrand Rouet-Leduc, Laura J Pyrak-Nolte, Gregory C Beroza, Chris J Marone, Claudia Hulbert, Addison Howard, Philipp Singer, Dmitry Gordeev, Dimosthenis Karaflos, et al. Laboratory earthquake forecasting: A machine learning competition. PNAS, 118:e2011362118, 2021.
-  Ruben Rodriguez Torrado, Ahmed Khalifa, Michael Cerny Green, Niels Justesen, Sebastian Risi, and Julian Togelius. Bootstrapping conditional gans for video game level generation. In 2020 IEEE Conference on Games (CoG), pages 41–48. IEEE, 2020.
-  Ruben Rodriguez Torrado, Philip Bontrager, Julian Togelius, Jialin Liu, and Diego Perez-Liebana. Deep reinforcement learning for general video game AI. In 2018 IEEE Conference on Computational Intelligence and Games (CIG), pages 1–8. IEEE, 2018.
-  Yohai Bar-Sinai, Stephan Hoyer, Jason Hickey, and Michael P. Brenner. Learning data-driven discretizations for partial differential equations. PNAS, 116:15344–15349, 2019.
-  F. Regazzoni, L. Dedé, and A. Quarteroni. Machine learning for fast and reliable solution of time-dependent differential equations. Journal of Computational Physics, 397:108852, 2019.
-  E. Samaniego, C. Anitescu, S. Goswami, V. M. Nguyen-Thanh, H. Guo, K. Hamdia, X. Zhuang, and T. Rabczuk. An energy approach to the solution of partial differential equations in computational mechanics via machine learning: Concepts, implementation and applications. Computer Methods in Applied Mechanics and Engineering, 362:112790, 2020.
-  Jiequn Han, Arnulf Jentzen, and Weinan E. Solving high-dimensional partial differential equations using deep learning. PNAS, 115:8505–8510, 2018.
-  Christian Beck, Martin Hutzenthaler, Arnulf Jentzen, and Benno Kuckuck. An overview on deep learning-based approximation methods for partial differential equations. arXiv preprint arXiv:2012.12348, 2020.
-  Maziar Raissi, Paris Perdikaris, and George E Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686–707, 2019.
-  Maziar Raissi, Paris Perdikaris, and George Em Karniadakis. Physics informed deep learning (part i): Data-driven solutions of nonlinear partial differential equations. arXiv preprint arXiv:1711.10561, 2017.
-  Maziar Raissi, Alireza Yazdani, and George Em Karniadakis. Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations. Science, 367:1026–1030, 2020.
-  Steven L. Brunton, Bernd R. Noack, and Petros Koumoutsakos. Machine learning for fluid mechanics. Annual Review of Fluid Mechanics, 52:477–508, 2020.
-  Teeratorn Kadeethum, Thomas M Jørgensen, and Hamidreza M Nick. Physics-informed neural networks for solving nonlinear diffusivity and biot’s equations. PloS one, 15(5):e0232683, 2020.
-  Ehsan Haghighat, Maziar Raissi, Adrian Moure, Hector Gomez, and Ruben Juanes. A physics-informed deep learning framework for inversion and surrogate modeling in solid mechanics. Comput. Methods Appl. Mech. Engrg., 379:113741, 2021.
-  Xiaowei Jin, Shengze Cai, Hui Li, and George Em Karniadakis. Nsfnets (navier-stokes flow nets): Physics-informed neural networks for the incompressible navier-stokes equations. Journal of Computational Physics, 426:109951, 2021.
-  Liu Yang, Xuhui Meng, and George Em Karniadakis. B-pinns: Bayesian physics-informed neural networks for forward and inverse pde problems with noisy data. Journal of Computational Physics, 425:109913, 2021.
-  Zhiping Mao, Ameya D. Jagtap, and George Em Karniadakis. Physics-informed neural networks for high-speed flows. Computer Methods in Applied Mechanics and Engineering, 360:112789, 2020.
-  Ameya D. Jagtap, Ehsan Kharazmi, and George Em Karniadakis. Conservative physics-informed neural networks on discrete domains for conservation laws: Applications to forward and inverse problems. Computer Methods in Applied Mechanics and Engineering, 365:113028, 2020.
-  Dongkun Zhang, Lu Lu, Ling Guo, and George E. Karniadakis. Quantifying total uncertainty in physics-informed neural networks for solving forward and inverse stochastic problems. Journal of Computational Physics, 397:108850, 2019.
-  Ramakrishna Tipireddy, David A. Barajas-Solano, and Alexandre M. Tartakovsky. Conditional Karhunen-Loève expansion for uncertainty quantification and active learning in partial differential equation models. Journal of Computational Physics, 418:109604, 2020.
-  Constantine M. Dafermos. Hyperbolic Conservation Laws in Continuum Physics. Berlin. Springer-Verlag, 2000.
-  Randall J. Leveque. Numerical Methods for Conservation Laws (2. ed.). Lectures in Mathematics: ETH Zurich. Birkäuser, 1992.
-  Olga Fuks and Hamdi A Tchelepi. Limitations of physics informed machine learning for nonlinear two-phase transport in porous media. Journal of Machine Learning for Modeling and Computing, 1(1), 2020.
-  Cedric G Fraces, Adrien Papaioannou, and Hamdi Tchelepi. Physics informed deep learning for transport in porous media. buckley leverett problem. arXiv preprint arXiv:2001.05172, 2020.
-  Cedric G. Fraces and Hamdi Tchelepi. Physics informed deep learning for flow and transport in porous media. arXiv preprint arXiv:2104.02629, 2021.
-  Craig Michoski, Milos Milosavljevic, Todd Oliver, and David R. Hatch. Solving differential equations using deep neural networks. Neurocomputing, 399:193–212, 2020.
-  Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
-  Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. arXiv preprint arXiv:1706.03762, 2017.
-  S.E. Buckley and M.C. Leverett. Mechanism of fluid displacement in sands. Trans. AIME, 241:107–116, 1942.
-  Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.