Infinite-dimensional Folded-in-time Deep Neural Networks

by   Florian Stelzer, et al.

The method recently introduced in arXiv:2011.10115 realizes a deep neural network with just a single nonlinear element and delayed feedback, and is suitable for describing physically implemented neural networks. In this work, we present an infinite-dimensional generalization, which allows for a more rigorous mathematical analysis and greater flexibility in choosing the weight functions. More precisely, the weights are described by Lebesgue-integrable functions instead of step functions. We also provide a functional backpropagation algorithm, which enables gradient-descent training of the weights. In addition, with a slight modification, our concept realizes recurrent neural networks.
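The folding idea can be illustrated with a small, discretized sketch (not the paper's actual implementation): a single nonlinearity is applied sequentially in time, and delayed feedback couples consecutive time intervals so that each interval acts as one hidden layer. Here the weight functions are approximated by step functions, i.e. ordinary matrices; the function name and shapes are illustrative assumptions.

```python
import numpy as np

def folded_forward(x, weights, f=np.tanh):
    """Emulate a deep network folded into time (conceptual sketch).

    Each 'layer' corresponds to one delay interval: the single nonlinear
    element processes the signal, and the delayed feedback taps realize
    the matrix-vector product coupling consecutive intervals.
    `weights` is a list of (N, N) arrays, a step-function discretization
    of the integrable weight functions described in the abstract.
    """
    a = np.asarray(x, dtype=float)
    for W in weights:      # one pass through the delay line == one layer
        a = f(W @ a)       # single nonlinearity f, reused for every layer
    return a

# Toy usage: 3 "layers" folded into time, 5 temporal nodes each.
rng = np.random.default_rng(0)
N, L = 5, 3
ws = [rng.normal(scale=0.5, size=(N, N)) for _ in range(L)]
y = folded_forward(rng.normal(size=N), ws)
```

In the infinite-dimensional setting of the paper, the matrices `W` become integral operators with Lebesgue-integrable kernels, and the loop over layers becomes the evolution of a delay differential equation.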
