Infinite-dimensional Folded-in-time Deep Neural Networks

01/08/2021
by   Florian Stelzer, et al.

The method recently introduced in arXiv:2011.10115 realizes a deep neural network with just a single nonlinear element and delayed feedback, and is applicable to the description of physically implemented neural networks. In this work, we present an infinite-dimensional generalization, which allows for a more rigorous mathematical analysis and greater flexibility in choosing the weight functions. Specifically, the weights are described by Lebesgue integrable functions instead of step functions. We also provide a functional backpropagation algorithm, which enables gradient descent training of the weights. In addition, with a slight modification, our concept realizes recurrent neural networks.
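To make the construction concrete: in the folded-in-time scheme, the nodes of each hidden layer correspond to successive time slots of a single delay-dynamical system, and the layer-to-layer couplings arise from delayed feedback with time-modulated gains. The sketch below is an illustrative emulation of the resulting equivalent layered network, not a simulation of the delay-differential system itself; the variable names and the use of `tanh` are assumptions for illustration. The weight matrices here play the role of the feedback modulations, which the original method describes by step functions and this work generalizes to Lebesgue integrable weight functions.

```python
import numpy as np

def folded_forward(x, weights, f=np.tanh):
    """Emulate a deep network realized by a single nonlinearity f
    applied sequentially, layer by layer.

    weights : list of (n_out, n_in) arrays standing in for the
    time-modulated delayed-feedback couplings of the delay system.
    """
    a = np.asarray(x, dtype=float)
    for W in weights:
        # In the physical system this matrix-vector product emerges
        # from delayed feedback loops with modulated gains; here it is
        # computed directly, layer by layer, through the same f.
        a = f(W @ a)
    return a

rng = np.random.default_rng(0)
Ws = [rng.normal(scale=0.5, size=(4, 3)),
      rng.normal(scale=0.5, size=(2, 4))]
y = folded_forward([0.1, -0.2, 0.3], Ws)
print(y.shape)  # (2,)
```

Because every layer reuses the same nonlinear element, depth is traded for time: a deeper emulated network simply means a longer run of the single delay loop.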

