Higher Order Recurrent Neural Networks

04/30/2016
by Rohollah Soltani, et al.

In this paper, we study novel neural network structures to better model long-term dependency in sequential data. We propose to use more memory units to keep track of more preceding states in recurrent neural networks (RNNs), all of which are recurrently fed back to the hidden layers through different weighted paths. By extending the popular recurrent structure in RNNs, we provide the models with a better short-term memory mechanism to learn long-term dependency in sequences. Analogous to digital filters in signal processing, we call these structures higher order RNNs (HORNNs). Like RNNs, HORNNs can be learned using the back-propagation through time method. HORNNs are generally applicable to a variety of sequence modeling tasks. In this work, we examine HORNNs on the language modeling task using two popular data sets, namely the Penn Treebank (PTB) and English text8 data sets. Experimental results show that the proposed HORNNs yield state-of-the-art performance on both data sets, significantly outperforming regular RNNs as well as the popular LSTMs.
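To make the recurrence concrete, here is a minimal sketch (not the authors' code) of what an order-N HORNN step could look like in NumPy: the hidden state at time t is computed from the current input and the N preceding hidden states, each fed back through its own weight matrix, roughly h_t = tanh(W x_t + sum_{n=1..N} U_n h_{t-n}). Names such as hornn_step, W_in, and U_list are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def hornn_step(x_t, prev_states, W_in, U_list, b):
    """One higher-order recurrent step (illustrative sketch).

    x_t         : input vector at time t, shape (input_dim,)
    prev_states : the N most recent hidden states [h_{t-1}, ..., h_{t-N}]
    W_in        : input-to-hidden weights, shape (hidden_dim, input_dim)
    U_list      : N hidden-to-hidden matrices, one per feedback path
    b           : hidden bias, shape (hidden_dim,)
    """
    pre_activation = W_in @ x_t + b
    # Each preceding state reaches the hidden layer through its own weighted path.
    for U_n, h_prev in zip(U_list, prev_states):
        pre_activation += U_n @ h_prev
    return np.tanh(pre_activation)

# Toy usage: an order-3 HORNN run over a short random sequence.
rng = np.random.default_rng(0)
input_dim, hidden_dim, order, T = 5, 8, 3, 10
W_in = rng.normal(scale=0.1, size=(hidden_dim, input_dim))
U_list = [rng.normal(scale=0.1, size=(hidden_dim, hidden_dim)) for _ in range(order)]
b = np.zeros(hidden_dim)

states = [np.zeros(hidden_dim) for _ in range(order)]  # h_{t-1}, ..., h_{t-N}
for t in range(T):
    x_t = rng.normal(size=input_dim)
    h_t = hornn_step(x_t, states, W_in, U_list, b)
    states = [h_t] + states[:-1]  # shift the window of preceding states
```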


