A biological gradient descent for prediction through a combination of STDP and homeostatic plasticity

06/21/2012
by Mathieu Galtier, et al.

Identifying, formalizing, and combining biological mechanisms that implement known brain functions, such as prediction, is a central goal of current research in theoretical neuroscience. In this letter, the mechanisms of Spike Timing Dependent Plasticity (STDP) and homeostatic plasticity, combined in an original mathematical formalism, are shown to shape recurrent neural networks into predictors. Through a rigorous mathematical treatment, we prove that together they implement an online gradient descent on a distance between the network activity and its stimuli. Convergence to an equilibrium, at which the network can spontaneously reproduce or predict its stimuli, does not suffer from the bifurcation issues usually encountered when learning in recurrent neural networks.
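The combination described above admits a simple rate-based illustration. The sketch below is a minimal assumption-laden example, not the paper's formalism: the network dynamics, stimulus, variable names, and learning rate are all hypothetical. It shows an online gradient step on the squared distance between the stimulus and the recurrent prediction, where the weight update decomposes into a Hebbian, STDP-like outer-product term and an implicit activity-dependent (homeostatic-like) decay.

import numpy as np

# Minimal rate-based sketch (hypothetical; the paper's exact equations may differ).
# A recurrent weight matrix W is trained online so that the recurrent drive W @ r
# predicts the stimulus, via an outer-product (Hebbian/STDP-like) potentiation term
# balanced by an activity-dependent decay.

rng = np.random.default_rng(0)
N = 50          # number of neurons
dt = 0.01       # integration step
eta = 0.005     # learning rate (gradient step size)

W = 0.01 * rng.standard_normal((N, N))   # recurrent weights to be learned
x = np.zeros(N)                          # membrane-like state

def stimulus(t):
    # toy periodic stimulus the network should learn to reproduce/predict
    return np.sin(t + np.linspace(0.0, 2.0 * np.pi, N))

for step in range(20000):
    t = step * dt
    u = stimulus(t)
    r = np.tanh(x)                       # firing rates
    # leaky rate dynamics driven by recurrence and stimulus
    x += dt * (-x + W @ r + u)

    # prediction error between stimulus and recurrent prediction
    err = u - W @ r
    # online gradient step on 0.5 * ||u - W r||^2:
    # expands as eta * (u r^T - (W r) r^T), i.e. a Hebbian-like term
    # minus an activity-dependent decay of the existing weights
    W += eta * np.outer(err, r)

# after learning, the recurrent input W @ tanh(x) approximates the stimulus
print("final prediction error:", np.linalg.norm(stimulus(20000 * dt) - W @ np.tanh(x)))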

