Equilibrated Recurrent Neural Network: Neuronal Time-Delayed Self-Feedback Improves Accuracy and Stability

03/02/2019
by Ziming Zhang, et al.

We propose a novel Equilibrated Recurrent Neural Network (ERNN) to combat the issues of inaccuracy and instability in conventional RNNs. Drawing upon the concept of autapse in neuroscience, we augment an RNN with a time-delayed self-feedback loop. The sole purpose of this loop is to modify the dynamics of each internal RNN state so that, at any time, the state evolves close to the equilibrium point associated with the input signal at that time. We show that such self-feedback stabilizes the hidden-state transitions, leading to fast convergence during training while efficiently learning discriminative latent features that yield state-of-the-art results on several benchmark datasets at test time. To generate the latent features at each hidden state, we propose a novel inexact Newton method that solves the fixed-point conditions given the model parameters. We prove that this inexact Newton method converges locally at a linear rate under mild conditions, and we leverage this result for efficient backpropagation-based training of ERNNs.
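To make the per-step computation concrete, below is a minimal NumPy sketch of the kind of fixed-point solve the abstract describes: each hidden state is driven toward an equilibrium of h = tanh(W h + U x + b) via a damped Newton iteration on the residual. The function name, the tanh nonlinearity, the damping scheme, and all shapes are illustrative assumptions, not the authors' exact algorithm.

import numpy as np

def equilibrium_state(W, U, b, x, h0, max_iter=50, tol=1e-6, damping=0.5):
    # Sketch: solve the fixed-point condition h = tanh(W @ h + U @ x + b)
    # by Newton iteration on the residual F(h) = h - tanh(W @ h + U @ x + b).
    # The paper's "inexact" Newton method solves the linear system only
    # approximately; here we simply damp a full Newton step instead.
    h = h0.copy()
    n = h.size
    I = np.eye(n)
    for _ in range(max_iter):
        t = np.tanh(W @ h + U @ x + b)
        F = h - t                                # fixed-point residual
        if np.linalg.norm(F) < tol:
            break
        J = I - (1.0 - t**2)[:, None] * W        # Jacobian: I - diag(tanh') @ W
        h = h - damping * np.linalg.solve(J, F)  # damped Newton step
    return h

# Hypothetical usage: process a short input sequence, warm-starting each
# per-step solve from the previous equilibrium state.
rng = np.random.default_rng(0)
n, d = 8, 4
W = 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)  # small weights aid convergence
U = rng.standard_normal((n, d))
b = np.zeros(n)

h = np.zeros(n)
for x in rng.standard_normal((5, d)):  # a length-5 input sequence
    h = equilibrium_state(W, U, b, x, h0=h)

Warm-starting from the previous hidden state matters here: Newton's method converges only locally, and successive equilibria along a sequence tend to be close, which keeps each solve in the basin of attraction.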


