Operator theory, kernels, and Feedforward Neural Networks

01/03/2023
by Palle E. T. Jorgensen, et al.

In this paper we show how specific families of positive definite kernels serve as powerful tools in the analysis of iteration algorithms for multiple-layer feedforward neural network models. Our focus is on particular kernels that adapt well to learning algorithms for datasets/features which display intrinsic self-similarities under feedforward iterations of scaling.
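As a concrete illustration of the kernel viewpoint, the sketch below evaluates a classical positive definite kernel, the Gaussian/RBF kernel, on the features produced by iterating a small feedforward layer, and checks that the resulting Gram matrix stays positive semidefinite at every depth. This is a minimal sketch, not the paper's construction: the layer map, the sizes, and the choice of kernel are all illustrative assumptions.

```python
# A minimal sketch, not the construction from the paper: evaluate a
# standard positive definite kernel (Gaussian/RBF) on features produced
# by iterating a small feedforward layer. All sizes, the layer map, and
# the kernel choice are illustrative assumptions.
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """K(x, y) = exp(-||x - y||^2 / (2 sigma^2)), a classical
    positive definite kernel."""
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(Y**2, axis=1)[None, :]
                - 2.0 * X @ Y.T)
    return np.exp(-sq_dists / (2.0 * sigma**2))

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))            # 8 sample points with 4 features
W = rng.normal(size=(4, 4)) / 2.0      # shared weights, reused each layer

Z = X
for depth in range(3):
    K = gaussian_kernel(Z, Z)
    # A positive definite kernel gives a positive semidefinite Gram
    # matrix at every layer; the smallest eigenvalue is >= 0 up to
    # floating-point round-off.
    print(f"depth {depth}: min eigenvalue of Gram matrix = "
          f"{np.linalg.eigvalsh(K).min():.2e}")
    Z = np.maximum(Z @ W, 0.0)         # ReLU feedforward iteration
```

Reusing the same weight matrix at every depth is the crudest possible stand-in for the self-similar scaling across layers that the abstract alludes to: one map, iterated.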

research 05/14/2023
Conditional mean embeddings and optimal feature selection via positive definite kernels
Motivated by applications, we consider here new operator theoretic appro...

research 03/24/2015
Universal Approximation of Markov Kernels by Shallow Stochastic Feedforward Networks
We establish upper bounds for the minimal number of hidden units for whi...

research 09/26/2022
Unifying Model-Based and Neural Network Feedforward: Physics-Guided Neural Networks with Linear Autoregressive Dynamics
Unknown nonlinear dynamics often limit the tracking performance of feedf...

research 05/10/2005
Distant generalization by feedforward neural networks
This paper discusses the notion of generalization of training samples ov...

research 02/10/2020
Nonlinear Equation Solving: A Faster Alternative to Feedforward Computation
Feedforward computations, such as evaluating a neural network or samplin...

research 08/28/2023
Fast Feedforward Networks
We break the linear link between the layer size and its inference cost b...

research 05/09/2021
Holomorphic feedforward networks
A very popular model in machine learning is the feedforward neural netwo...
