Learning fixed points of recurrent neural networks by reparameterizing the network model

07/13/2023
by Vicky Zhu, et al.

In computational neuroscience, fixed points of recurrent neural networks are commonly used to model neural responses to static or slowly changing stimuli. These applications raise the question of how to train the weights in a recurrent neural network to minimize a loss function evaluated on fixed points. A natural approach is to use gradient descent on the Euclidean space of synaptic weights. We show that this approach can lead to poor learning performance due, in part, to singularities that arise in the loss surface. We use a reparameterization of the recurrent network model to derive two alternative learning rules that produce more robust learning dynamics. We show that these learning rules can be interpreted as steepest descent and gradient descent, respectively, under a non-Euclidean metric on the space of recurrent weights. Our results question the common, implicit assumption that learning in the brain should be expected to follow the negative Euclidean gradient of synaptic weights.
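The abstract does not spell out the reparameterization, but its logic can be illustrated on the simplest case: a linear rate network whose fixed point is r* = (I - W)^{-1} x. One natural choice, assumed here purely for illustration, is to descend the loss in A = (I - W)^{-1} instead of W, since the fixed point is linear in A. The NumPy sketch below contrasts the two updates; the linear setup, variable names, and hyperparameters are this example's assumptions, not the paper's code, and the paper's exact learning rules may differ.

```python
import numpy as np

# Illustrative sketch only: a linear rate network dr/dt = -r + W r + x
# with fixed point r* = (I - W)^{-1} x, trained to match a target fixed point.
# The reparameterization A = (I - W)^{-1} is an assumption for this example.

rng = np.random.default_rng(0)
n = 10                          # number of recurrent units
x = rng.normal(size=n)          # static input
r_target = rng.normal(size=n)   # target fixed-point response
eta, steps = 0.05, 200          # learning rate, iterations

def fixed_point(W):
    """Fixed point of the linear network: r* = (I - W)^{-1} x."""
    return np.linalg.solve(np.eye(n) - W, x)

def loss(r):
    return 0.5 * np.sum((r - r_target) ** 2)

# --- Naive approach: Euclidean gradient descent directly on W ---
W = 0.1 * rng.normal(size=(n, n))
for _ in range(steps):
    A = np.linalg.inv(np.eye(n) - W)
    r = A @ x
    # Chain rule through r* = (I - W)^{-1} x gives
    # grad_W = A^T (r - r_target) r*^T, which blows up as I - W nears singularity.
    W -= eta * (A.T @ np.outer(r - r_target, r))

# --- Reparameterized approach: gradient descent on A, then recover W ---
A = np.linalg.inv(np.eye(n) - 0.1 * rng.normal(size=(n, n)))
for _ in range(steps):
    r = A @ x
    A -= eta * np.outer(r - r_target, x)  # dL/dA for the linear map r* = A x
W_reparam = np.eye(n) - np.linalg.inv(A)  # map back to recurrent weights

print("loss, descent on W:", loss(fixed_point(W)))
print("loss, descent on A:", loss(fixed_point(W_reparam)))
```

In the A parameterization the fixed point depends linearly on the parameters, so the loss surface in A is free of the singularities that appear in W-space where I - W loses invertibility; a descent step in A corresponds to a non-Euclidean step in W, consistent with the abstract's interpretation of the rules as steepest descent under a non-Euclidean metric.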

Related research

05/30/2023 · Synaptic Weight Distributions Depend on the Geometry of Plasticity
Most learning algorithms in machine learning rely on gradient descent to...

06/21/2012 · A biological gradient descent for prediction through a combination of STDP and homeostatic plasticity
Identifying, formalizing and combining biological mechanisms which imple...

12/29/2016 · A Basic Recurrent Neural Network Model
We present a model of a basic recurrent neural network (or bRNN) that in...

08/29/2016 · About Learning in Recurrent Bistable Gradient Networks
Recurrent Bistable Gradient Networks are attractor based neural networks...

05/12/2021 · CCN GAC Workshop: Issues with learning in biological recurrent neural networks
This perspective piece came about through the Generative Adversarial Col...

06/23/2020 · Thalamocortical motor circuit insights for more robust hierarchical control of complex sequences
We study learning of recurrent neural networks that produce temporal seq...

12/02/2022 · Vector Symbolic Finite State Machines in Attractor Neural Networks
Hopfield attractor networks are robust distributed models of human memor...
