Constructing Gradient Controllable Recurrent Neural Networks Using Hamiltonian Dynamics

11/11/2019
by Konstantin Rusch, et al.

Recurrent neural networks (RNNs) have received a great deal of attention for solving sequential learning problems. Learning long-term dependencies, however, remains challenging due to the problem of vanishing or exploding hidden-state gradients. By further exploring the recently established connections between RNNs and dynamical systems, we propose a novel RNN architecture, which we call a Hamiltonian recurrent neural network (Hamiltonian RNN), based on a symplectic discretization of an appropriately chosen Hamiltonian system. The key benefit of this approach is that the resulting RNN inherits the favorable long-time properties of the Hamiltonian system, which in turn allows us to control the hidden-state gradients through a hyperparameter of the Hamiltonian RNN architecture. This enables us to handle sequential learning problems with arbitrary sequence lengths, since for a range of values of this hyperparameter the gradients neither vanish nor explode. Additionally, we provide a heuristic for the optimal choice of this hyperparameter, which we use in our numerical simulations to show that the Hamiltonian RNN outperforms other state-of-the-art RNNs without the need for computationally intensive hyperparameter tuning.
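The abstract does not spell out the discretization, so the following is only a minimal sketch of the general idea: the hidden state is split into position and momentum halves (y, z) and advanced by one symplectic (semi-implicit) Euler step of an assumed separable Hamiltonian H(y, z) = 0.5‖z‖² + Σ log cosh(Wy + Vx + b). The names hamiltonian_rnn_step, W, V, b, and the step size eps (standing in for the gradient-controlling hyperparameter mentioned above) are illustrative choices, not taken from the paper.

import numpy as np

def hamiltonian_rnn_step(y, z, x, W, V, b, eps):
    """One symplectic Euler step of a sketched Hamiltonian RNN cell.

    Assumed separable Hamiltonian (not taken from the paper):
        H(y, z) = 0.5 * ||z||^2 + sum(log(cosh(W @ y + V @ x + b)))
    which gives dH/dz = z and dH/dy = W.T @ tanh(W @ y + V @ x + b).
    """
    y_next = y + eps * z                      # position update: + eps * dH/dz
    a = W @ y_next + V @ x + b
    z_next = z - eps * (W.T @ np.tanh(a))     # momentum update: - eps * dH/dy at y_next
    return y_next, z_next

def unroll(xs, W, V, b, eps):
    """Run the cell over a sequence xs of shape (T, input_dim)."""
    d = W.shape[0]
    y, z = np.zeros(d), np.zeros(d)
    for x in xs:
        y, z = hamiltonian_rnn_step(y, z, x, W, V, b, eps)
    return y, z

Because each step is a symplectic map, its Jacobian has unit determinant, which is the usual mechanism by which symplectic discretizations preserve the long-time behavior of the underlying Hamiltonian flow; the step size eps then acts as the knob that keeps the hidden-state gradients in a usable range, per the abstract.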


Related research

03/09/2021
UnICORNN: A recurrent model for learning very long time dependencies
The design of recurrent neural networks (RNNs) to accurately process seq...

09/29/2019
Symplectic Recurrent Neural Networks
We propose Symplectic Recurrent Neural Networks (SRNNs) as learning algo...

09/12/2017
RRA: Recurrent Residual Attention for Sequence Learning
In this paper, we propose a recurrent neural network (RNN) with residual...

11/09/2018
Complex Unitary Recurrent Neural Networks using Scaled Cayley Transform
Recurrent neural networks (RNNs) have been successfully used on a wide r...

08/22/2017
Twin Networks: Using the Future as a Regularizer
Being able to model long-term dependencies in sequential data, such as t...

06/12/2023
On the Dynamics of Learning Time-Aware Behavior with Recurrent Neural Networks
Recurrent Neural Networks (RNNs) have shown great success in modeling ti...
