Fredholm integral equations for function approximation and the training of neural networks

03/09/2023
by Patrick Gelß, et al.

We present a novel and mathematically transparent approach to function approximation and to the training of large, high-dimensional neural networks. It is based on the approximate least-squares solution of associated Fredholm integral equations of the first kind by Ritz-Galerkin discretization, Tikhonov regularization, and tensor-train methods. Practical application to supervised learning problems of regression and classification type confirms that the resulting algorithms are competitive with state-of-the-art neural network-based methods.
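To make the named ingredients concrete, the following is a minimal, illustrative sketch, not the paper's tensor-train algorithm: it discretizes a one-dimensional Fredholm integral equation of the first kind with a Ritz-Galerkin ansatz over piecewise-constant basis functions and solves the resulting ill-conditioned linear system in the Tikhonov-regularized least-squares sense. The Gaussian kernel, grid size, and regularization parameter are assumptions chosen purely for illustration.

```python
import numpy as np

# Illustrative sketch (assumed setup, not the paper's method):
# solve a 1D Fredholm integral equation of the first kind,
#     f(x) = \int_0^1 k(x, y) u(y) dy,
# by Ritz-Galerkin discretization with piecewise-constant basis
# functions and Tikhonov-regularized least squares.

n = 200                                    # number of basis functions / cells
h = 1.0 / n
x = (np.arange(n) + 0.5) * h               # cell midpoints (quadrature nodes)

k = lambda s, t: np.exp(-(s - t) ** 2 / 0.02)   # assumed smoothing kernel
u_true = np.sin(2 * np.pi * x)                  # assumed "exact" solution

# Galerkin system A c = b with midpoint quadrature:
#   A[i, j] ~ h^2 k(x_i, y_j),  b[i] ~ h f(x_i)
K = k(x[:, None], x[None, :])
A = h ** 2 * K
f = h * K @ u_true                              # synthetic right-hand side f(x_i)
b = h * f
b += 1e-6 * np.random.default_rng(0).standard_normal(n)  # mild data noise

# Tikhonov regularization: minimize ||A c - b||^2 + lam * ||c||^2
lam = 1e-8
c = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

print("relative error:", np.linalg.norm(c - u_true) / np.linalg.norm(u_true))
```

Without the regularization term the first-kind problem is severely ill-conditioned and the noise in b would dominate the reconstruction; the same regularized least-squares structure carries over to the high-dimensional, tensor-train-discretized systems described in the paper.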


