Learning in Feedback-driven Recurrent Spiking Neural Networks using full-FORCE Training

05/26/2022
by   Ankita Paul, et al.

Feedback-driven recurrent spiking neural networks (RSNNs) are powerful computational models that can mimic dynamical systems. However, the feedback loop from the readout to the recurrent layer destabilizes the learning mechanism and prevents it from converging. Here, we propose a supervised training procedure for RSNNs in which a second network, introduced only during training, provides hints about the target dynamics. The procedure generates targets for both the recurrent and readout layers (i.e., for the full RSNN system) and uses the recursive least squares-based First-Order Reduced and Controlled Error (FORCE) algorithm to fit the activity of each layer to its target. The proposed full-FORCE training procedure reduces the magnitude of the modifications needed to keep the error between the output and the target close to zero. These modifications control the feedback loop, which allows training to converge. We demonstrate the improved performance and noise robustness of full-FORCE training on eight dynamical systems modeled with RSNNs of leaky integrate-and-fire (LIF) neurons and rate coding. For energy-efficient hardware implementation, an alternative time-to-first-spike (TTFS) coding is implemented for the full-FORCE training procedure. Compared to rate coding, full-FORCE with TTFS coding generates fewer spikes and converges faster to the target dynamics.
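The FORCE algorithm fits a readout by recursive least squares: at each time step, the inverse correlation estimate P of the recurrent activity is updated with a rank-1 correction, and the readout weights are nudged along the resulting gain direction to cancel the instantaneous error. The sketch below is a minimal, generic RLS/FORCE step (function and variable names are our own, not from the paper):

```python
import numpy as np

def rls_force_step(w, P, r, z_target):
    """One RLS-based FORCE update so that w @ r tracks z_target.

    w        : (N,) readout weight vector
    P        : (N, N) running estimate of the inverse correlation matrix
    r        : (N,) firing-rate vector of the recurrent layer at this step
    z_target : scalar target output at this step
    """
    z = w @ r                       # current readout
    e = z - z_target                # error before the update
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)         # RLS gain vector
    P -= np.outer(k, Pr)            # rank-1 update of P
    w -= e * k                      # shrink the error along the gain direction
    return w, P

# Usage: initialise P = I/alpha, then apply the step along a trajectory.
rng = np.random.default_rng(0)
N = 100
w = np.zeros(N)
P = np.eye(N)
r = rng.standard_normal(N)
w, P = rls_force_step(w, P, r, z_target=0.5)
```

In full-FORCE, the same kind of least-squares fit is applied not only to the readout but also to targets generated for the recurrent layer itself, which is what removes the need for large ongoing feedback corrections.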

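The two spike-coding schemes contrasted in the abstract differ only in how the same spike train is read out: rate coding uses the spike count over a window, while TTFS coding uses the latency of the first spike. A minimal LIF simulation illustrating both decodings (parameter values are illustrative assumptions, not taken from the paper):

```python
def lif_spikes(I, T=100, dt=1.0, tau=20.0, v_th=1.0):
    """Simulate a leaky integrate-and-fire neuron with constant input I.

    Returns the list of spike times (in integer steps) over T steps,
    with the membrane potential reset to 0 after each spike.
    """
    v, spikes = 0.0, []
    for t in range(T):
        v += (dt / tau) * (I - v)   # leaky integration toward I
        if v >= v_th:
            spikes.append(t)
            v = 0.0                 # hard reset after a spike
    return spikes

spikes = lif_spikes(I=1.5)
rate_code = len(spikes)                     # rate coding: spike count
ttfs_code = spikes[0] if spikes else None   # TTFS coding: first-spike latency
```

Because TTFS needs only one spike per neuron to carry a value, it naturally produces far fewer spikes than rate coding, which is the source of the energy savings the abstract reports.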
