Neuroevolution Surpasses Stochastic Gradient Descent for Physics-Informed Neural Networks

12/15/2022
by Nicholas Sung Wei Yong, et al.

The potential of learned models for fundamental scientific research and discovery is drawing increasing attention. Physics-informed neural networks (PINNs), whose loss functions directly embed the governing equations of scientific phenomena, are among the key techniques at the forefront of recent advances. These models are typically trained using stochastic gradient descent, akin to their standard deep learning counterparts. However, in this paper, we carry out a simple analysis showing that the loss functions arising in PINNs exhibit a high degree of complexity and ruggedness that may not be conducive to gradient descent and its variants. This suggests that neuroevolutionary algorithms may be better suited than gradient descent for training PINNs. Our claim is strongly supported herein by benchmark problems and baseline results demonstrating that the convergence rates achieved by neuroevolution can indeed surpass those of gradient descent for PINN training. Furthermore, implementing neuroevolution with JAX yields an orders-of-magnitude speedup relative to standard implementations.
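To make the abstract's two ingredients concrete, here is a minimal sketch (not the authors' code) of a PINN loss whose residual embeds a governing equation, trained by a simple evolution strategy in JAX. The toy ODE du/dx = -u with u(0) = 1, the network sizes, and the ES hyperparameters are illustrative assumptions; the vmap over the population is the kind of batching that gives JAX-based neuroevolution its speedup.

```python
import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree

def mlp(params, x):
    # Tiny fully connected net: scalar input x, scalar output u(x).
    h = jnp.asarray(x).reshape(1)
    for w, b in params[:-1]:
        h = jnp.tanh(h @ w + b)
    w, b = params[-1]
    return (h @ w + b)[0]

def init_params(key, sizes=(1, 16, 16, 1)):
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) / jnp.sqrt(m), jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def pinn_loss(params, xs):
    # The governing equation du/dx + u = 0 enters the loss directly as a
    # squared residual, plus the boundary condition u(0) = 1.
    du = jax.vmap(jax.grad(mlp, argnums=1), in_axes=(None, 0))(params, xs)
    u = jax.vmap(mlp, in_axes=(None, 0))(params, xs)
    bc = (mlp(params, 0.0) - 1.0) ** 2
    return jnp.mean((du + u) ** 2) + bc

def es_step(key, params, xs, pop=64, sigma=0.02, lr=0.05):
    # One step of a simple evolution strategy (an illustrative stand-in for
    # the paper's neuroevolution method): evaluate a population of Gaussian
    # perturbations in parallel with vmap, then move the mean against the
    # loss-weighted perturbation direction. No gradients of the loss are
    # taken with respect to the parameters.
    flat, unravel = ravel_pytree(params)
    eps = jax.random.normal(key, (pop, flat.size))
    losses = jax.vmap(lambda e: pinn_loss(unravel(flat + sigma * e), xs))(eps)
    step = eps.T @ (losses - losses.mean()) / (pop * sigma)
    return unravel(flat - lr * step)

key = jax.random.PRNGKey(0)
params = init_params(key)
xs = jnp.linspace(0.0, 1.0, 32)
for _ in range(200):
    key, sub = jax.random.split(key)
    params = es_step(sub, params, xs)
print(pinn_loss(params, xs))  # should approach 0; exact solution is exp(-x)
```

Because the whole population evaluation is a single vmapped (and jit-compilable) computation, scaling the population size costs little wall-clock time on an accelerator, which is one plausible reading of the speedup the abstract reports.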
