
Training Input-Output Recurrent Neural Networks through Spectral Methods
We consider the problem of training input-output recurrent neural networks (RNNs) for sequence labeling tasks. We propose a novel spectral approach for learning the network parameters, based on decomposition of the cross-moment tensor between the output and a nonlinear, score-function transformation of the input. We guarantee consistent learning with polynomial sample and computational complexity under transparent conditions such as non-degeneracy of the model parameters, polynomial activations for the neurons, and a Markovian evolution of the input sequence. We also extend our results to bidirectional RNNs, which use both past and future information to output the label at each time point and are employed in many NLP tasks such as POS tagging.
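The computational core of spectral approaches of this kind is decomposing a symmetric cross-moment tensor into rank-one components, typically via the tensor power method with deflation. The sketch below is not the paper's algorithm; it is a minimal, self-contained illustration of that decomposition step on a synthetic third-order tensor with orthonormal components (the regime where the power method provably recovers them). All names (`power_iteration`, `W`, `lam`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic symmetric 3rd-order tensor T = sum_j lam_j * w_j (x) w_j (x) w_j,
# with orthonormal components w_j -- a stand-in for the estimated
# cross-moment tensor, not the paper's actual construction.
d, k = 8, 3
W, _ = np.linalg.qr(rng.standard_normal((d, d)))
W = W[:, :k]                      # k orthonormal columns (true components)
lam = np.array([3.0, 2.0, 1.0])   # distinct positive weights
T = np.einsum('j,aj,bj,cj->abc', lam, W, W, W)

def power_iteration(T, n_starts=10, n_iter=100):
    """Recover one (weight, component) pair of T by the tensor power method."""
    best = None
    for _ in range(n_starts):
        v = rng.standard_normal(T.shape[0])
        v /= np.linalg.norm(v)
        for _ in range(n_iter):
            v = np.einsum('abc,b,c->a', T, v, v)   # the map v -> T(I, v, v)
            v /= np.linalg.norm(v)
        val = np.einsum('abc,a,b,c->', T, v, v, v)  # T(v, v, v)
        if best is None or val > best[0]:
            best = (val, v)
    return best

# Deflation: recover components one at a time, subtracting each rank-one term.
recovered = []
for _ in range(k):
    val, v = power_iteration(T)
    recovered.append((val, v))
    T = T - val * np.einsum('a,b,c->abc', v, v, v)

for val, v in recovered:
    # Each recovered direction should align with one true column of W.
    cos = np.max(np.abs(W.T @ v))
    print(f"weight ~ {val:.3f}, max |cosine| with true components = {cos:.3f}")
```

With orthonormal components and distinct weights, each round of power iteration plus deflation recovers one component up to numerical precision; consistency guarantees of the kind stated in the abstract concern how well the empirical cross-moment tensor approximates such a decomposable population tensor.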