Accelerated Training of Physics Informed Neural Networks (PINNs) using Meshless Discretizations

05/19/2022
by   Ramansh Sharma, et al.

We present a new technique for the accelerated training of physics-informed neural networks (PINNs): discretely-trained PINNs (DT-PINNs). The repeated computation of partial derivative terms in the PINN loss functions via automatic differentiation during training is known to be computationally expensive, especially for higher-order derivatives. DT-PINNs are trained by replacing these exact spatial derivatives with high-order accurate numerical discretizations computed using meshless radial basis function-finite differences (RBF-FD) and applied via sparse matrix-vector multiplication. The use of RBF-FD allows DT-PINNs to be trained even on point cloud samples placed on irregular domain geometries. Additionally, though traditional PINNs (vanilla-PINNs) are typically stored and trained in 32-bit floating point (fp32) on the GPU, we show that for DT-PINNs, using fp64 on the GPU leads to significantly faster training times than fp32 vanilla-PINNs with comparable accuracy. We demonstrate the efficiency and accuracy of DT-PINNs via a series of experiments. First, we explore the effect of network depth on both numerical and automatic differentiation of a neural network with random weights and show that RBF-FD approximations of third-order accuracy and above are more efficient while being sufficiently accurate. We then compare DT-PINNs to vanilla-PINNs on both linear and nonlinear Poisson equations and show that DT-PINNs achieve similar losses with 2-4x faster training times on a consumer GPU. Finally, we demonstrate that similar results can be obtained for the PINN solution to the heat equation (a space-time problem) by discretizing the spatial derivatives using RBF-FD and using automatic differentiation for the temporal derivative. Our results show that fp64 DT-PINNs offer a superior cost-accuracy profile to fp32 vanilla-PINNs.
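The core idea in the abstract can be illustrated with a small sketch: instead of differentiating the network output via autodiff at every training step, one precomputes a sparse differentiation matrix L once and applies it to the vector of network outputs at the collocation nodes with a sparse matrix-vector product, L @ u. The sketch below is a simplified stand-in, not the paper's method: it uses a 1D central-difference Laplacian on a uniform grid, whereas DT-PINNs compute RBF-FD weights on scattered nodes of an irregular domain. All variable names here are illustrative.

```python
import numpy as np
import scipy.sparse as sp

# Uniform 1D grid standing in for the paper's scattered point cloud.
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

# Sparse second-derivative (Laplacian) matrix via central differences.
# In DT-PINNs this matrix would instead hold RBF-FD stencil weights,
# but the training-time operation is the same: a sparse SpMV.
main = -2.0 * np.ones(n)
off = np.ones(n - 1)
L = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csr") / h**2

# Stand-in for the neural network's output evaluated at the nodes.
u = x**2

# One sparse matrix-vector product replaces repeated autodiff calls
# for the spatial derivative term in the PINN loss.
lap_u = L @ u

# For u = x^2, the exact Laplacian is 2 everywhere; central differences
# reproduce this on the interior nodes.
print(np.allclose(lap_u[1:-1], 2.0))
```

Because L is fixed during training, it can be built once in fp64 and reused every iteration, which is what makes the fp64 SpMV path cheap relative to repeated higher-order autodiff.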


Related research

- FO-PINNs: A First-Order formulation for Physics Informed Neural Networks (10/25/2022)
  "We present FO-PINNs, physics-informed neural networks that are trained u..."
- High Precision Differentiation Techniques for Data-Driven Solution of Nonlinear PDEs by Physics-Informed Neural Networks (10/02/2022)
  "Time-dependent Partial Differential Equations with given initial conditi..."
- Efficient physics-informed neural networks using hash encoding (02/26/2023)
  "Physics-informed neural networks (PINNs) have attracted a lot of attenti..."
- GPU accelerated RBF-FD solution of Poisson's equation (07/08/2021)
  "The Radial Basis Function-generated finite differences became a popular ..."
- Enhancing approximation abilities of neural networks by training derivatives (12/12/2017)
  "Method for increasing precision of feedforward networks is presented. Wi..."
- A Curriculum-Training-Based Strategy for Distributing Collocation Points during Physics-Informed Neural Network Training (11/21/2022)
  "Physics-informed Neural Networks (PINNs) often have, in their loss funct..."
- CAN-PINN: A Fast Physics-Informed Neural Network Based on Coupled-Automatic-Numerical Differentiation Method (10/29/2021)
  "In this study, novel physics-informed neural network (PINN) methods for ..."
