Dilated Recurrent Neural Networks

10/05/2017
by Shiyu Chang, et al.

Learning with recurrent neural networks (RNNs) on long sequences is a notoriously difficult task. There are three major challenges: 1) complex dependencies, 2) vanishing and exploding gradients, and 3) efficient parallelization. In this paper, we introduce a simple yet effective RNN connection structure, the DilatedRNN, which simultaneously tackles all of these challenges. The proposed architecture is characterized by multi-resolution dilated recurrent skip connections and can be combined flexibly with diverse RNN cells. Moreover, the DilatedRNN reduces the number of parameters needed and enhances training efficiency significantly, while matching state-of-the-art performance (even with standard RNN cells) in tasks involving very long-term dependencies. To provide a theory-based quantification of the architecture's advantages, we introduce a memory capacity measure, the mean recurrent length, which is more suitable for RNNs with long skip connections than existing measures. We rigorously prove the advantages of the DilatedRNN over other recurrent neural architectures. The code for our method is publicly available at https://github.com/code-terminator/DilatedRNN.
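To make the idea of a dilated recurrent skip connection concrete, here is a minimal, hypothetical PyTorch sketch (not the authors' released implementation; the class name DilatedRNNLayer and the fold/unfold details are illustrative). With dilation d, the hidden state at step t is updated from the state at step t - d rather than t - 1, which can be realized by splitting the sequence into d interleaved sub-sequences, running one shared standard RNN cell over all of them in parallel along the batch dimension, and interleaving the outputs again.

```python
import torch
import torch.nn as nn

class DilatedRNNLayer(nn.Module):
    """Illustrative single dilated recurrent layer (name is hypothetical).

    The recurrence skips d-1 steps: state(t) depends on state(t - d).
    Implemented by folding the sequence into d interleaved sub-sequences,
    running a shared RNN over them in parallel, then unfolding the outputs.
    """

    def __init__(self, input_size, hidden_size, dilation, cell=nn.GRU):
        super().__init__()
        self.dilation = dilation
        self.rnn = cell(input_size, hidden_size)  # any standard RNN cell works

    def forward(self, x):  # x: (seq_len, batch, input_size)
        t, b, c = x.shape
        d = self.dilation
        pad = (d - t % d) % d                     # pad so seq_len is a multiple of d
        if pad:
            x = torch.cat([x, x.new_zeros(pad, b, c)], dim=0)
        # fold: step i*d + j goes to sub-sequence j -> shape (seq_len//d, d*batch, c)
        x = x.reshape(-1, d, b, c).reshape(-1, d * b, c)
        out, _ = self.rnn(x)                      # shared weights across sub-sequences
        h = out.size(-1)
        # unfold back to the original time order and drop the padding
        out = out.reshape(-1, d, b, h).reshape(-1, b, h)
        return out[:t]

# Usage: a small multi-resolution stack with exponentially increasing dilations.
x = torch.randn(100, 8, 32)                       # (seq_len, batch, features)
stack = [DilatedRNNLayer(32, 64, 1),
         DilatedRNNLayer(64, 64, 2),
         DilatedRNNLayer(64, 64, 4)]
for layer in stack:
    x = layer(x)
print(x.shape)                                    # torch.Size([100, 8, 64])
```

Stacking layers with dilations 1, 2, 4, ... gives the multi-resolution structure described in the abstract: higher layers see exponentially longer-range dependencies while each layer still runs a plain RNN cell over shorter effective sequences, which is what reduces parameters and eases parallelization.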

Related research

08/22/2017  Skip RNN: Learning to Skip State Updates in Recurrent Neural Networks
06/12/2020  MomentumRNN: Integrating Momentum into Recurrent Neural Networks
10/25/2020  Attention is All You Need in Speech Separation
02/14/2014  A Clockwork RNN
02/26/2016  Architectural Complexity Measures of Recurrent Neural Networks
04/24/2023  Adaptive-saturated RNN: Remember more with less instability
03/21/2018  Comparing Fixed and Adaptive Computation Time for Recurrent Neural Networks
