
Metric Entropy Limits on Recurrent Neural Network Learning of Linear Dynamical Systems
One of the most influential results in neural network theory is the universal approximation theorem [1, 2, 3], which states that continuous functions can be approximated to within arbitrary accuracy by single-hidden-layer feedforward neural networks. The purpose of this paper is to establish a result in this spirit for the approximation of general discrete-time linear dynamical systems, including time-varying systems, by recurrent neural networks (RNNs). For the subclass of linear time-invariant (LTI) systems, we devise a quantitative version of this statement. Specifically, measuring the complexity of the considered class of LTI systems through metric entropy according to [4], we show that RNNs can optimally learn, or identify in system-theory parlance, stable LTI systems. For LTI systems whose input-output relation is characterized through a difference equation, this means that RNNs can learn the difference equation from input-output traces in a metric-entropy-optimal manner.
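The representational claim underlying the abstract can be illustrated with a minimal sketch: a linear RNN (state-space recursion) that exactly realizes an LTI difference equation. The specific system, its coefficients, and the state layout below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical stable LTI system given by the difference equation
#   y[t] = 0.5*y[t-1] + 1.0*x[t] + 0.3*x[t-1]
# realized exactly as a linear RNN with state h[t] = (y[t-1], x[t-1]):
#   h[t+1] = A @ h[t] + B * x[t],   y[t] = C @ h[t] + D * x[t]
A = np.array([[0.5, 0.3],
              [0.0, 0.0]])
B = np.array([1.0, 1.0])
C = np.array([0.5, 0.3])
D = 1.0

rng = np.random.default_rng(0)
x = rng.standard_normal(200)  # input trace

# Run the linear RNN.
h = np.zeros(2)
y_rnn = np.empty_like(x)
for t in range(len(x)):
    y_rnn[t] = C @ h + D * x[t]
    h = A @ h + B * x[t]

# Run the difference equation directly (zero initial conditions).
y_ref = np.empty_like(x)
y_prev, x_prev = 0.0, 0.0
for t in range(len(x)):
    y_ref[t] = 0.5 * y_prev + 1.0 * x[t] + 0.3 * x_prev
    y_prev, x_prev = y_ref[t], x[t]

# The two input-output traces coincide.
assert np.allclose(y_rnn, y_ref)
```

The sketch shows only exact realizability of one second-order difference equation; the paper's contribution is quantitative, bounding how the resources needed to learn such realizations from input-output traces scale with the metric entropy of the whole class of stable LTI systems.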