A connection between probability, physics and neural networks

09/26/2022
by Sascha Ranftl, et al.

We illustrate an approach for constructing neural networks that a priori obey physical laws. We start with a simple single-layer neural network (NN) but defer the choice of activation functions. Under certain conditions and in the infinite-width limit, the central limit theorem applies and the NN output becomes Gaussian. The limit network can then be investigated and manipulated with Gaussian process (GP) theory. Linear operators acting on a GP again yield a GP; this also holds for the differential operators that define differential equations and describe physical laws. Demanding that the GP, or equivalently the limit network, obey the physical law yields an equation for the covariance function, or kernel, of the GP, whose solution constrains the model to obey that law. The central limit theorem then suggests that NNs can be constructed to obey a physical law by choosing activation functions that reproduce the corresponding kernel in the infinite-width limit. Activation functions constructed in this way guarantee that the NN obeys the physics a priori, up to the approximation error of finite network width. Simple examples for the homogeneous 1D Helmholtz equation are discussed and compared to naive kernels and activations.
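
As a concrete illustration of the idea (a sketch, not necessarily the paper's exact construction), consider the homogeneous 1D Helmholtz equation u''(x) + k^2 u(x) = 0. A GP with kernel 0.5 cos(k (x - x')) satisfies the equation in both arguments, and a single-layer random-feature network with activations cos(k x + b), with phases b drawn uniformly from [0, 2 pi), converges to exactly this GP as the width grows. The NumPy sketch below (wavenumber k, width, grid, and sample counts are arbitrary choices for illustration) draws such a finite-width network, checks the PDE residual of one draw, and compares the empirical kernel to the closed form.

# Illustrative sketch: a single-layer random-feature network whose
# infinite-width kernel is 0.5*cos(k*(x - x')), a kernel annihilated by
# d^2/dx^2 + k^2, so sampled networks approximately obey the Helmholtz PDE.
import numpy as np

k = 2.0          # Helmholtz wavenumber (assumed value for illustration)
width = 5000     # number of hidden units; larger -> closer to the GP limit
x = np.linspace(0.0, 2.0 * np.pi, 400)

rng = np.random.default_rng(0)
# Random phases and i.i.d. Gaussian output weights (variance scaled by 1/width)
phases = rng.uniform(0.0, 2.0 * np.pi, size=width)
w = rng.normal(0.0, 1.0, size=width) / np.sqrt(width)

# Hidden-layer features cos(k*x + b_j); their average product over random
# phases tends to 0.5*cos(k*(x - x')) as the width grows.
features = np.cos(k * x[:, None] + phases[None, :])   # shape (n_points, width)
f = features @ w                                      # one random network draw

# Check the physics: finite-difference residual of u'' + k^2 u should be ~0.
h = x[1] - x[0]
residual = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / h**2 + k**2 * f[1:-1]
print("max |u'' + k^2 u| :", np.abs(residual).max())

# Check the kernel: empirical covariance over many weight draws (features
# fixed) versus the closed-form limit kernel.
draws = rng.normal(0.0, 1.0, size=(2000, width)) / np.sqrt(width)
samples = draws @ features.T                          # (n_draws, n_points)
emp_cov = samples.T @ samples / samples.shape[0]
analytic = 0.5 * np.cos(k * (x[:, None] - x[None, :]))
print("max kernel error  :", np.abs(emp_cov - analytic).max())

Running the sketch shows both residual and kernel error shrinking as the width and grid resolution increase, which is the finite-width approximation error mentioned in the abstract.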


Related research

Mixed neural network Gaussian processes (12/01/2021)
  This paper makes two contributions. Firstly, it introduces mixed composi...

Stationary Activations for Uncertainty Calibration in Deep Learning (10/19/2020)
  We introduce a new family of non-linear neural network activation functi...

Infinitely wide limits for deep Stable neural networks: sub-linear, linear and super-linear activation functions (04/08/2023)
  There is a growing literature on the study of large-width properties of ...

Scalable Partial Explainability in Neural Networks via Flexible Activation Functions (06/10/2020)
  Achieving transparency in black-box deep learning algorithms is still an...

Predicting the outputs of finite networks trained with noisy gradients (04/02/2020)
  A recent line of studies has focused on the infinite width limit of deep...

Deep neural networks for smooth approximation of physics with higher order and continuity B-spline base functions (01/03/2022)
  This paper deals with the following important research question. Traditi...

Neural signature kernels as infinite-width-depth-limits of controlled ResNets (03/30/2023)
  Motivated by the paradigm of reservoir computing, we consider randomly i...
