Shuffling Recurrent Neural Networks

07/14/2020
by Michael Rotman, et al.

We propose a novel recurrent neural network model in which the hidden state h_t is obtained by permuting the vector elements of the previous hidden state h_{t-1} and adding the output of a learned function b(x_t) of the input x_t at time t. In our model, the prediction is given by a second learned function applied to the hidden state, s(h_t). The method is easy to implement, extremely efficient, and does not suffer from vanishing or exploding gradients. In an extensive set of experiments, the method shows competitive results in comparison to the leading baselines from the literature.
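To make the update rule concrete, here is a minimal NumPy sketch of the recurrence described above: the hidden state is permuted, the output of b(x_t) is added, and the prediction is read out by s(h_t). The fixed permutation perm and the linear maps W_b and W_s are illustrative placeholders, not the paper's actual parameterization of b and s.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_dim, input_dim, output_dim = 8, 4, 2

# Fixed permutation of the hidden-state elements.
perm = rng.permutation(hidden_dim)

# Stand-ins for the learned functions b and s, stubbed here as
# single linear maps (a hypothetical choice; the abstract only
# says they are learned functions).
W_b = rng.standard_normal((hidden_dim, input_dim)) * 0.1
W_s = rng.standard_normal((output_dim, hidden_dim)) * 0.1

def step(h_prev, x_t):
    """One recurrent step: h_t = permute(h_{t-1}) + b(x_t)."""
    return h_prev[perm] + W_b @ x_t

def predict(h_t):
    """Readout s(h_t) producing the prediction at time t."""
    return W_s @ h_t

# Run the recurrence over a toy input sequence.
h = np.zeros(hidden_dim)
for x in rng.standard_normal((5, input_dim)):
    h = step(h, x)
    y = predict(h)
```

Because a permutation only reorders coordinates, it preserves the norm of the hidden state, which is consistent with the claim that the recurrence neither shrinks nor amplifies gradients over time.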


Related research

02/23/2017 · Inherent Biases of Recurrent Neural Networks for Phonological Assimilation and Dissimilation
A recurrent neural network model of phonological pattern learning is pro...

03/02/2019 · Equilibrated Recurrent Neural Network: Neuronal Time-Delayed Self-Feedback Improves Accuracy and Stability
We propose a novel Equilibrated Recurrent Neural Network (ERNN) to comb...

12/29/2016 · A Basic Recurrent Neural Network Model
We present a model of a basic recurrent neural network (or bRNN) that in...

11/20/2015 · Unitary Evolution Recurrent Neural Networks
Recurrent neural networks (RNNs) are notoriously difficult to train. Whe...

05/23/2017 · Grounded Recurrent Neural Networks
In this work, we present the Grounded Recurrent Neural Network (GRNN), a...

10/31/2016 · Full-Capacity Unitary Recurrent Neural Networks
Recurrent neural networks are powerful models for processing sequential ...
