Learning Long Term Dependencies via Fourier Recurrent Units

03/17/2018
by Jiong Zhang, et al.

Training recurrent neural networks on tasks with long-term dependencies is known to be challenging. One of the main reasons is the vanishing or exploding gradient problem, which prevents gradient information from propagating back to early layers. In this paper we propose a simple recurrent architecture, the Fourier Recurrent Unit (FRU), that stabilizes the gradients arising in its training while providing stronger expressive power. Specifically, FRU summarizes the hidden states h^(t) along the temporal dimension with Fourier basis functions. Thanks to FRU's residual learning structure and the global support of trigonometric functions, gradients can easily reach any layer. We show that FRU has gradient lower and upper bounds independent of the temporal dimension. We also show the strong expressivity of the sparse Fourier basis, from which FRU derives its expressive power. Our experimental study further demonstrates that, with fewer parameters, the proposed architecture outperforms other recurrent architectures on many tasks.
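As a rough illustration of the mechanism the abstract describes, the sketch below runs one FRU-style forward pass in NumPy: hidden states are accumulated into Fourier summaries via cosine basis functions, and the summaries grow by residual updates. The chosen frequencies, the way summaries feed back into the hidden state, and all shapes and names (fru_forward, freqs, W_x, W_u) are illustrative assumptions based only on the abstract, not the paper's exact formulation.

```python
import numpy as np

# Minimal sketch of a Fourier Recurrent Unit (FRU) forward pass, assuming:
# the hidden state h^(t) is summarized along time with cosine basis functions,
# and summaries accumulate residually so gradient paths do not depend on T.
rng = np.random.default_rng(0)

T, d_in, d_h, K = 50, 8, 16, 4          # sequence length, input dim, hidden dim, #frequencies
freqs = np.array([0.0, 1.0, 2.0, 4.0])  # assumed sparse set of Fourier frequencies

W_x = rng.normal(scale=0.1, size=(d_h, d_in))      # input-to-hidden weights (illustrative)
W_u = rng.normal(scale=0.1, size=(d_h, K * d_h))   # summary-to-hidden feedback (assumed form)

def fru_forward(x_seq):
    """Process x_seq of shape (T, d_in); return the Fourier summaries u of shape (K, d_h)."""
    u = np.zeros((K, d_h))  # one summary of the hidden states per frequency
    for t in range(1, T + 1):
        # Hidden state from the current input and the previous summaries.
        h = np.tanh(W_x @ x_seq[t - 1] + W_u @ u.ravel())
        # Residual update: each summary accumulates h weighted by a cosine
        # basis function with global support over the whole sequence.
        basis = np.cos(2 * np.pi * freqs * t / T)          # shape (K,)
        u = u + (1.0 / T) * basis[:, None] * h[None, :]    # shape (K, d_h)
    return u

u = fru_forward(rng.normal(size=(T, d_in)))
print(u.shape)  # (4, 16)
```

Because every u update is an addition scaled by a bounded cosine factor, the gradient of u with respect to any h^(t) stays bounded regardless of where t falls in the sequence, which is the intuition behind the paper's length-independent gradient bounds.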

Related research

01/22/2019 · Towards Non-saturating Recurrent Units for Modelling Long-term Dependencies
Modelling long-term dependencies is a challenge for recurrent neural net...

06/24/2016 · Sampling-based Gradient Regularization for Capturing Long-Term Dependencies in Recurrent Neural Networks
Vanishing (and exploding) gradients effect is a common problem for recur...

04/03/2015 · A Simple Way to Initialize Recurrent Networks of Rectified Linear Units
Learning long term dependencies in recurrent networks is difficult due t...

11/20/2015 · Unitary Evolution Recurrent Neural Networks
Recurrent neural networks (RNNs) are notoriously difficult to train. Whe...

10/06/2018 · h-detach: Modifying the LSTM Gradient Towards Better Optimization
Recurrent neural networks are known for their notorious exploding and va...

09/14/2021 · Oscillatory Fourier Neural Network: A Compact and Efficient Architecture for Sequential Processing
Tremendous progress has been made in sequential processing with the rece...

03/25/2018 · Stabilizing Gradients for Deep Neural Networks via Efficient SVD Parameterization
Vanishing and exploding gradients are two of the main obstacles in train...
