Learning and Generalization in RNNs

05/31/2021
by Abhishek Panigrahi et al.

Simple recurrent neural networks (RNNs) and their more advanced cousins, such as LSTMs, have been very successful in sequence modeling. Their theoretical understanding, however, is lacking and has not kept pace with the progress made for feedforward networks, where a reasonably complete picture has emerged in the special case of highly overparametrized one-hidden-layer networks. In this paper, we make progress towards remedying this situation by proving that RNNs can learn functions of sequences. In contrast to previous work, which could only handle functions of sequences that are sums of functions of the individual tokens, we allow general functions. Conceptually and technically, we introduce new ideas that enable us to extract information from the hidden state of the RNN in our proofs, addressing a crucial weakness in previous work. We illustrate our results on some regular language recognition problems.
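
As a concrete illustration of the kind of regular language recognition task the abstract mentions (and not the paper's own construction or proof technique), the sketch below trains a vanilla Elman RNN to recognize PARITY, the regular language of bit strings containing an even number of 1s. The use of PyTorch, the architecture, and all hyperparameters here are our assumptions, chosen only to make the example self-contained and runnable.

```python
# Minimal illustrative sketch (assumed setup, not the paper's construction):
# train a simple Elman RNN to recognize the regular language PARITY,
# i.e. bit strings with an even number of 1s.
import torch
import torch.nn as nn

torch.manual_seed(0)
SEQ_LEN, HIDDEN, N_TRAIN, N_TEST = 12, 64, 2048, 512  # arbitrary choices

def make_batch(n, seq_len):
    # Random bit strings; label 1 iff the string contains an even number of 1s.
    x = torch.randint(0, 2, (n, seq_len, 1)).float()
    y = (x.sum(dim=(1, 2)) % 2 == 0).float()
    return x, y

class ParityRNN(nn.Module):
    def __init__(self, hidden):
        super().__init__()
        self.rnn = nn.RNN(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # linear readout of the final hidden state

    def forward(self, x):
        _, h = self.rnn(x)               # h: (num_layers, batch, hidden)
        return self.head(h[-1]).squeeze(-1)  # one logit per sequence

model = ParityRNN(HIDDEN)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

x_train, y_train = make_batch(N_TRAIN, SEQ_LEN)
for step in range(1000):                  # full-batch training for simplicity
    opt.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()
    opt.step()

x_test, y_test = make_batch(N_TEST, SEQ_LEN)
with torch.no_grad():
    acc = ((model(x_test) > 0).float() == y_test).float().mean()
print(f"held-out accuracy: {acc:.3f}")
```

Deciding membership in PARITY requires the network to carry one bit of information across the entire sequence in its hidden state, which is precisely the kind of information extraction from the hidden state that the paper's analysis addresses.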
