When Recurrent Models Don't Need To Be Recurrent

05/25/2018
by John Miller, et al.

We prove that stable recurrent neural networks are well approximated by feed-forward networks for the purposes of both inference and training by gradient descent. Our result applies to a broad range of non-linear recurrent neural networks under a natural stability condition, which we observe is also necessary. Complementing our theoretical findings, we verify our conclusions on both real and synthetic tasks. Furthermore, we demonstrate that recurrent models satisfying the stability assumption of our theory can achieve excellent performance on real sequence learning tasks.
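
As a rough illustration of the main claim, here is a minimal NumPy sketch (the names, dimensions, and truncation length k are illustrative choices, not taken from the paper): a tanh RNN whose recurrent matrix has spectral norm below 1, a standard sufficient condition for this kind of stability, is compared against a truncated model that only reads the last k inputs and can therefore be unrolled into a feed-forward network of depth k.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: hidden dim, input dim, sequence length, truncation length.
d_h, d_x, T, k = 32, 8, 200, 25

# Recurrent and input weights; W is rescaled so its spectral norm is below 1,
# a sufficient condition for stability of a tanh RNN.
W = rng.standard_normal((d_h, d_h))
W *= 0.9 / np.linalg.norm(W, 2)
U = rng.standard_normal((d_h, d_x)) / np.sqrt(d_x)

def recurrent_state(x):
    """Full recurrent model: unrolls over the entire input sequence."""
    h = np.zeros(d_h)
    for x_t in x:
        h = np.tanh(W @ h + U @ x_t)
    return h

def truncated_state(x, k):
    """Feed-forward approximation: only the last k inputs are used,
    so the computation has fixed depth k."""
    h = np.zeros(d_h)
    for x_t in x[-k:]:
        h = np.tanh(W @ h + U @ x_t)
    return h

x = rng.standard_normal((T, d_x))
gap = np.linalg.norm(recurrent_state(x) - truncated_state(x, k))
print(f"distance between recurrent and truncated states: {gap:.2e}")

For a stable model, increasing k shrinks this gap geometrically, which is the intuition behind approximating the recurrent model with a fixed-depth feed-forward one.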

Related research

12/22/2019
Contracting Implicit Recurrent Neural Networks: Stable Models with Improved Trainability
Stability of recurrent models is closely linked with trainability, gener...

08/23/2023
Stabilizing RNN Gradients through Pre-training
Numerous theories of learning suggest to prevent the gradient variance f...

06/04/2021
Approximate Fixed-Points in Recurrent Neural Networks
Recurrent neural networks are widely used in speech and language process...

10/05/2014
Learning Topology and Dynamics of Large Recurrent Neural Networks
Large-scale recurrent networks have drawn increasing attention recently ...

06/23/2020
Lipschitz Recurrent Neural Networks
Differential equations are a natural choice for modeling recurrent neura...

10/06/2021
On The Vulnerability of Recurrent Neural Networks to Membership Inference Attacks
We study the privacy implications of deploying recurrent neural networks...

03/25/2020
R-FORCE: Robust Learning for Random Recurrent Neural Networks
Random Recurrent Neural Networks (RRNN) are the simplest recurrent netwo...