On orthogonality and learning recurrent networks with long term dependencies

01/31/2017
by Eugene Vorontsov, et al.

It is well known that training deep neural networks and recurrent neural networks is challenging on tasks that exhibit long-term dependencies, and the vanishing and exploding gradient problems are a central cause of that difficulty. One approach to addressing vanishing and exploding gradients is to place soft or hard constraints on weight matrices that encourage or enforce orthogonality. Orthogonal matrices preserve gradient norm during backpropagation, so orthogonality may be a desirable property. This paper explores issues of optimization convergence, speed, and gradient stability when orthogonality is encouraged or enforced. To perform this analysis, we propose a weight matrix factorization and parameterization strategy through which we can bound matrix norms and thereby control the degree of expansivity induced during backpropagation. We find that hard constraints on orthogonality can negatively affect the speed of convergence and model performance.
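
The factorization strategy lends itself to a short illustration. Below is a minimal sketch, assuming PyTorch: the class name FactorizedLinear is hypothetical, and PyTorch's built-in orthogonal parametrization stands in for whatever scheme the paper uses to keep the factors orthogonal. A square weight matrix is written as W = U diag(s) V^T with orthogonal U and V, and each singular value s_i is squashed into [1 - m, 1 + m] by a scaled sigmoid, so the margin m bounds how expansive or contractive the map (and hence the backpropagated gradient) can be.

```python
import torch
import torch.nn as nn

class FactorizedLinear(nn.Module):
    """Illustrative factorized layer: W = U @ diag(s) @ V.T.

    U and V are kept orthogonal (here via PyTorch's built-in orthogonal
    parametrization, as a stand-in for updates constrained to the set of
    orthogonal matrices). Each singular value s_i is squashed into
    [1 - margin, 1 + margin], bounding how much the layer can expand or
    contract norms in the forward and backward passes.
    """

    def __init__(self, n: int, margin: float = 0.1):
        super().__init__()
        self.margin = margin
        # Orthogonal factors U and V (weights of two bias-free linear maps).
        self.u = nn.utils.parametrizations.orthogonal(nn.Linear(n, n, bias=False))
        self.v = nn.utils.parametrizations.orthogonal(nn.Linear(n, n, bias=False))
        # Unconstrained parameters from which singular values are derived;
        # p = 0 gives sigmoid(0) = 0.5, i.e. every s_i starts at exactly 1.
        self.p = nn.Parameter(torch.zeros(n))

    def singular_values(self) -> torch.Tensor:
        # s_i = 2 * m * (sigmoid(p_i) - 0.5) + 1, so s_i lies in [1-m, 1+m].
        return 2.0 * self.margin * (torch.sigmoid(self.p) - 0.5) + 1.0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Assemble W = U diag(s) V^T and apply it to a batch of row vectors.
        w = self.u.weight @ torch.diag(self.singular_values()) @ self.v.weight.T
        return x @ w.T

# Usage: with margin=0.05, gradient norms through this layer can change by
# at most 5 percent per application.
layer = FactorizedLinear(4, margin=0.05)
y = layer(torch.randn(8, 4))
print(layer.singular_values())  # all values within [0.95, 1.05]
```

Setting margin = 0 pins every singular value at 1, i.e. a hard orthogonality constraint; a small positive margin relaxes it, which is the kind of knob the analysis above varies when comparing soft and hard constraints.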

