Implicit Bias of Linear RNNs
Contemporary wisdom based on empirical studies suggests that standard recurrent neural networks (RNNs) do not perform well on tasks requiring long-term memory. However, a precise explanation for this behavior has remained elusive. This paper provides a rigorous explanation of this property in the special case of linear RNNs. Although this work is limited to linear RNNs, even these systems have traditionally been difficult to analyze due to their non-linear parameterization. Using recently developed kernel regime analysis, our main result shows that linear RNNs learned from random initializations are functionally equivalent to a certain weighted 1D-convolutional network. Importantly, the weightings in the equivalent model induce an implicit bias toward elements with smaller time lags in the convolution and hence toward shorter memory. The degree of this bias depends on the variance of the transition kernel matrix at initialization and is related to the classic exploding and vanishing gradients problem. The theory is validated in both synthetic- and real-data experiments.
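To make the equivalence concrete, below is a minimal numerical sketch, not the paper's formal kernel-regime derivation. A single-input, single-output linear RNN with state update x_{k+1} = A x_k + B u_k and output y_k = C x_k is exactly a causal 1D convolution of the input with the impulse response h_j = C A^j B. The dimensions, the initialization scale sigma, and all variable names here are illustrative assumptions; with sigma < 1 the kernel magnitude decays roughly geometrically in the lag j, which mirrors the short-memory bias described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative choices (assumptions, not values from the paper).
n, T = 64, 20        # hidden-state dimension, sequence length
sigma = 0.9          # scale of the random transition matrix at initialization

# Linear RNN: x_{k+1} = A x_k + B u_k,   y_k = C x_k,   x_0 = 0
A = (sigma / np.sqrt(n)) * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1)) / np.sqrt(n)
C = rng.standard_normal((1, n)) / np.sqrt(n)

def rnn_output(u):
    """Run the linear RNN over the scalar input sequence u."""
    x = np.zeros((n, 1))
    y = []
    for u_k in u:
        x = A @ x + B * u_k
        y.append((C @ x).item())
    return np.array(y)

# Impulse response of the equivalent 1D convolution: h[j] = C A^j B.
h = np.array([(C @ np.linalg.matrix_power(A, j) @ B).item() for j in range(T)])

u = rng.standard_normal(T)
y_rnn = rnn_output(u)
y_conv = np.array([sum(h[j] * u[t - j] for j in range(t + 1)) for t in range(T)])

print(np.allclose(y_rnn, y_conv))   # True: the RNN acts as a causal 1D convolution
print(np.abs(h))                    # |h[j]| shrinks with lag j: short-memory bias
```

Raising sigma toward or above 1 flattens or blows up the effective kernel, which is the same dependence on the transition matrix's initialization variance that the abstract connects to the exploding and vanishing gradients problem.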