Shuffling Recurrent Neural Networks

07/14/2020
by Michael Rotman, et al.

We propose a novel recurrent neural network model, where the hidden state h_t is obtained by permuting the vector elements of the previous hidden state h_{t-1} and adding the output of a learned function b(x_t) of the input x_t at time t. The prediction is given by a second learned function s, applied to the hidden state: s(h_t). The method is easy to implement, extremely efficient, and does not suffer from vanishing or exploding gradients. In an extensive set of experiments, the method shows results competitive with the leading baselines from the literature.
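The recurrence described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the permutation here is drawn at random and fixed, and the learned functions b and s are stood in for by hypothetical linear maps W_b and W_s. Note that a fixed permutation of the hidden state is an orthogonal, norm-preserving operation, which is why gradients passing through the recurrence neither vanish nor explode.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_h, d_out = 4, 8, 2

# Fixed permutation of the hidden-state indices (hypothetical choice;
# the paper determines the model's actual permutation separately).
perm = rng.permutation(d_h)

# Hypothetical stand-ins for the learned functions:
# b(x_t) = W_b @ x_t embeds the input, s(h_t) = W_s @ h_t predicts.
W_b = 0.1 * rng.standard_normal((d_h, d_in))
W_s = 0.1 * rng.standard_normal((d_out, d_h))

def step(h_prev, x_t):
    """One recurrence step: permute h_{t-1}, then add b(x_t)."""
    return h_prev[perm] + W_b @ x_t

def predict(h_t):
    """Prediction s(h_t) from the current hidden state."""
    return W_s @ h_t

# Run the recurrence over a short input sequence.
h = np.zeros(d_h)
xs = rng.standard_normal((5, d_in))
for x_t in xs:
    h = step(h, x_t)
y = predict(h)

# The permutation preserves the hidden state's norm exactly.
print(np.isclose(np.linalg.norm(h[perm]), np.linalg.norm(h)))
```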
