Infinite-dimensional Folded-in-time Deep Neural Networks
The method recently introduced in arXiv:2011.10115 realizes a deep neural network with just a single nonlinear element and delayed feedback. It is applicable to the description of physically implemented neural networks. In this work, we present an infinite-dimensional generalization, which allows for a more rigorous mathematical analysis and greater flexibility in choosing the weight functions. More precisely, the weights are described by Lebesgue-integrable functions instead of step functions. We also provide a functional backpropagation algorithm, which enables gradient-descent training of the weights. In addition, with a slight modification, our concept realizes recurrent neural networks.
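As a rough illustration (not the paper's implementation), one "layer" of such an infinite-dimensional network can be thought of as mapping a function x_l on [0, 1] to x_{l+1}(s) = f(∫ w_l(s, t) x_l(t) dt), where the weight kernel w_l is Lebesgue integrable rather than a step function. The sketch below approximates that integral by a midpoint quadrature on a uniform grid; the names f, w, layer, and the grid size M are illustrative assumptions, not notation from the paper.

```python
import numpy as np

# Hypothetical sketch of one integral layer of an infinite-dimensional network:
#   x_{l+1}(s) = f( \int_0^1 w_l(s, t) x_l(t) dt ),
# with a Lebesgue-integrable weight kernel w_l instead of a step function.
# The integral is approximated on a uniform grid via the midpoint rule.

M = 200                                  # grid points for the quadrature
t = (np.arange(M) + 0.5) / M             # midpoints of the interval [0, 1]
dt = 1.0 / M

f = np.tanh                              # the single nonlinear element

def layer(x, w):
    """Apply one integral layer: x has shape (M,); w samples w(s, t), shape (M, M)."""
    return f(w @ x * dt)                 # quadrature approximation of the integral

# a smooth (hence integrable) example weight kernel w(s, t) = sin(2*pi*(s - t))
S, T = np.meshgrid(t, t, indexing="ij")
w0 = np.sin(2 * np.pi * (S - T))

x0 = np.cos(2 * np.pi * t)               # example input function sampled on the grid
x1 = layer(x0, w0)
print(x1.shape)                          # shape of the output function's samples
```

Stacking several such calls with different kernels would emulate the layers of the folded-in-time network; refining the grid (larger M) corresponds to moving toward the continuous, infinite-dimensional description.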