R-FORCE: Robust Learning for Random Recurrent Neural Networks

03/25/2020
by   Yang Zheng, et al.

Random Recurrent Neural Networks (RRNN) are among the simplest recurrent networks for modeling and extracting features from sequential data. This simplicity comes at a price: RRNN are known to be susceptible to the vanishing/exploding gradient problem when trained with gradient-descent based optimization. To enhance the robustness of RRNN, alternative training approaches have been proposed. In particular, the FORCE learning approach uses recursive least squares to train RRNN and has been shown to be applicable even to the challenging task of target-learning, where the network must generate dynamic patterns with no guiding input. While FORCE training shows that target-learning can be solved, it appears to be effective only in a specific regime of network dynamics (the edge of chaos). We therefore investigate whether initializing RRNN connectivity according to a tailored distribution can guarantee robust FORCE learning. We construct such a distribution by inferring four generating principles that constrain the spectrum of the network Jacobian to remain within the stability region. This initialization, combined with FORCE learning, provides a robust training method, i.e., Robust-FORCE (R-FORCE). We validate R-FORCE performance on various target functions for a wide range of network configurations and compare it with alternative methods. Our experiments indicate that R-FORCE facilitates significantly more stable and accurate target-learning for a wide class of RRNN. Such stability becomes critical in modeling multi-dimensional sequences, as we demonstrate by modeling time-series of human body joints during physical movements.
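The core of FORCE training referenced in the abstract is a recursive least squares (RLS) update of the readout weights applied while the network runs, with the readout fed back into the recurrent dynamics. The sketch below illustrates only that underlying mechanism; it follows the standard FORCE formulation rather than the paper's R-FORCE initialization, and the network size, gain, regularization constant, and sinusoidal target are illustrative assumptions.

```python
import numpy as np

# Minimal FORCE (recursive least squares) sketch for a random recurrent network.
# All hyperparameters and the target signal are illustrative assumptions,
# not values taken from the R-FORCE paper.

N, dt, tau = 300, 0.1, 1.0           # network size, time step, time constant
g = 1.5                              # gain; g > 1 places the network in the chaotic regime
rng = np.random.default_rng(0)

J = g * rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # random recurrent connectivity
w_fb = rng.uniform(-1.0, 1.0, N)                   # feedback weights from the readout
w = np.zeros(N)                                    # readout weights (trained online)
P = np.eye(N)                                      # running inverse correlation matrix

T = 2000
target = np.sin(2 * np.pi * np.arange(T) * dt / 10.0)  # example 1-D target pattern

x = rng.normal(0.0, 0.5, N)          # network state
r = np.tanh(x)
z = w @ r

for t in range(T):
    # Leaky tanh-rate dynamics with readout feedback (target-learning: no guiding input).
    x += dt / tau * (-x + J @ r + w_fb * z)
    r = np.tanh(x)
    z = w @ r

    # Recursive least squares (FORCE) update of the readout weights.
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)
    P -= np.outer(k, Pr)
    e = z - target[t]                # readout error before the weight update
    w -= e * k                       # drive the error toward zero at every step
```

In this formulation the online RLS update keeps the readout error small at every time step, which is what allows the chaotic recurrent dynamics to be entrained to the target; R-FORCE's contribution, per the abstract, is a tailored initialization of J that keeps this procedure stable across a wider range of network configurations.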

