Recent Advances in Recurrent Neural Networks

12/29/2017
by Hojjat Salehinejad et al.

Recurrent neural networks (RNNs) can learn features and long-term dependencies from sequential and time-series data. An RNN consists of a stack of non-linear units in which at least one connection between units forms a directed cycle. A well-trained RNN can model any dynamical system; in practice, however, training is hampered by the difficulty of learning long-term dependencies. In this paper, we present a survey of RNNs and several recent advances for newcomers and professionals in the field. We explain the fundamentals and recent advances, and introduce open research challenges.
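To make the "directed cycle" in the abstract concrete, here is a minimal NumPy sketch of a vanilla (Elman-style) RNN step; the weight names, sizes, and initialization are illustrative assumptions, not taken from the paper. The hidden state computed at one step is fed back as input to the next, which is the recurrence that both enables sequence modeling and makes long-term dependencies hard to learn.

```python
import numpy as np

# Minimal Elman-style RNN cell (illustrative sketch; names and sizes are arbitrary).
rng = np.random.default_rng(0)

input_size, hidden_size = 3, 4
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input-to-hidden weights
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden-to-hidden (recurrent) weights
b_h = np.zeros(hidden_size)

def rnn_step(x, h_prev):
    """One recurrent step: h_t = tanh(W_xh x_t + W_hh h_{t-1} + b)."""
    return np.tanh(W_xh @ x + W_hh @ h_prev + b_h)

# Unroll the cycle over a short input sequence: h at step t depends on h at t-1.
h = np.zeros(hidden_size)
sequence = rng.normal(size=(5, input_size))
for x_t in sequence:
    h = rnn_step(x_t, h)
```

Because gradients flow backward through the repeated `W_hh` multiplication during training, they can shrink or grow exponentially with sequence length, which is the vanishing/exploding-gradient problem behind the training difficulties the abstract mentions.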


Related research:

- 02/13/2016: Learning Over Long Time Lags
- 11/09/2015: Deep Recurrent Neural Networks for Sequential Phenotype Prediction in Genomics
- 06/08/2017: Gated Orthogonal Recurrent Units: On Learning to Forget
- 05/23/2019: Population-based Global Optimisation Methods for Learning Long-term Dependencies with RNNs
- 08/08/2016: Syntactically Informed Text Compression with Recurrent Neural Networks
- 08/22/2017: Twin Networks: Using the Future as a Regularizer
- 11/05/2021: Recurrent Neural Networks for Learning Long-term Temporal Dependencies with Reanalysis of Time Scale Representation
