
Physics-informed Neural Network Software for Molecular Dynamics Applications
We have developed a novel differential equation solver software called P...

A reduced order Schwarz method for nonlinear multiscale elliptic equations based on two-layer neural networks
Neural networks are powerful tools for approximating high dimensional da...

Uniform convergence of an upwind discontinuous Galerkin method for solving scaled discrete-ordinate radiative transfer equations with isotropic scattering kernel
We present an error analysis for the discontinuous Galerkin method appli...

Efficient training of physics-informed neural networks via importance sampling
Physics-Informed Neural Networks (PINNs) are a class of deep neural netw...

Physics-Informed Neural Network for Modelling the Thermochemical Curing Process of Composite-Tool Systems During Manufacture
We present a Physics-Informed Neural Network (PINN) to simulate the ther...

Deep learning based on mixed-variable physics-informed neural network for solving fluid dynamics without simulation data
Deep learning methods have attracted tremendous attention to handle fluid ...

A Discontinuity Capturing Shallow Neural Network for Elliptic Interface Problems
In this paper, a new Discontinuity Capturing Shallow Neural Network (DCS...
Solving multiscale steady radiative transfer equation using neural networks with uniform stability
This paper concerns solving the steady radiative transfer equation with diffusive scaling, using physics-informed neural networks (PINNs). The idea of PINNs is to minimize a least-squares loss function that consists of the residual from the governing equation, the mismatch from the boundary conditions, and other physical constraints such as conservation. This approach has the advantage of being flexible and easy to implement, and holds promise for high-dimensional problems. Nevertheless, due to the presence of small scales, vanilla PINNs can be extremely unstable for solving multiscale steady transfer equations. In this paper, we propose a new formulation of the loss based on the macro-micro decomposition. We prove that the new loss function is uniformly stable with respect to the small Knudsen number, in the sense that the L^2 error of the neural network solution is uniformly controlled by the loss. When the boundary condition is anisotropic, a boundary layer emerges in the diffusion limit and therefore brings an additional difficulty in training the neural network. To resolve this issue, we include a boundary layer corrector that carries the sharp transition part of the solution and leaves the rest easy to approximate. The effectiveness of the new methodology is demonstrated in extensive numerical examples.
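The loss structure the abstract describes (PDE residual plus boundary mismatch, summed as a least-squares objective) can be sketched on a toy 1D problem. The code below is a minimal illustration, not the paper's method: it uses a small fixed two-layer tanh network, a simple Poisson-type equation chosen for this example, and central finite differences in place of the automatic differentiation a real PINN would use.

```python
import numpy as np

# Toy problem for illustration: u''(x) = -pi^2 sin(pi x) on (0, 1),
# with u(0) = u(1) = 0 (exact solution sin(pi x)).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 1)), np.zeros((16, 1))   # hidden layer
W2, b2 = rng.normal(size=(1, 16)), np.zeros((1, 1))    # output layer

def u_nn(x):
    """Two-layer tanh network evaluated at a 1D array of points x."""
    h = np.tanh(W1 @ x[None, :] + b1)
    return (W2 @ h + b2).ravel()

def pinn_loss(x_int, eps=1e-4):
    # PDE residual term over interior collocation points,
    # with u'' approximated by a central finite difference.
    u_pp = (u_nn(x_int + eps) - 2 * u_nn(x_int) + u_nn(x_int - eps)) / eps**2
    f = -np.pi**2 * np.sin(np.pi * x_int)
    residual = np.mean((u_pp - f) ** 2)
    # Boundary mismatch term: u(0) = u(1) = 0.
    boundary = np.mean(u_nn(np.array([0.0, 1.0])) ** 2)
    return residual + boundary

x_col = np.linspace(0.01, 0.99, 50)   # interior collocation points
print(pinn_loss(x_col))               # scalar loss to be minimized over weights
```

Training would then minimize this scalar over the network weights with a gradient-based optimizer; the paper's contribution is replacing this vanilla loss with a macro-micro-decomposed one whose control of the L^2 error is uniform in the Knudsen number.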