Metric entropy of causal, discrete-time LTI systems

11/28/2022
by Clemens Hutter, et al.

In [1] it is shown that recurrent neural networks (RNNs) can learn discrete-time, linear time-invariant (LTI) systems in a metric-entropy-optimal manner. This is established by comparing the number of bits needed to encode the approximating RNN with the metric entropy of the class of LTI systems under consideration [2, 3]. The purpose of this note is to provide an elementary, self-contained proof of the metric entropy results in [2, 3]; in the process, minor mathematical issues appearing in [2, 3] are cleaned up. These corrections also lead to the correction of a constant in a result in [1] (see Remark 2.5).
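The connection between RNNs and LTI systems underlying this comparison can be made concrete: a causal, discrete-time LTI system in state-space form x_{t+1} = A x_t + B u_t, y_t = C x_t + D u_t is realized exactly by a linear RNN whose hidden state is x_t. The sketch below (all matrices are illustrative choices, not taken from the note) checks numerically that the RNN rollout matches the causal convolution with the system's impulse response h_0 = D, h_k = C A^{k-1} B for k ≥ 1.

```python
import numpy as np

# Hypothetical illustration: a random stable single-input single-output
# LTI system, simulated two ways. Matrices are arbitrary, not from [1-3].
rng = np.random.default_rng(0)
n, T = 3, 20                            # state dimension, sequence length
A = 0.5 * rng.standard_normal((n, n))   # scaled down for stability
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
D = rng.standard_normal((1, 1))
u = rng.standard_normal(T)              # input sequence

# Realization 1: linear RNN rollout with hidden state x_t
x = np.zeros(n)
y_rnn = np.empty(T)
for t in range(T):
    y_rnn[t] = (C @ x).item() + D.item() * u[t]   # y_t = C x_t + D u_t
    x = A @ x + B[:, 0] * u[t]                    # x_{t+1} = A x_t + B u_t

# Realization 2: causal convolution with the impulse response
# h_0 = D, h_k = C A^{k-1} B for k >= 1
h = np.empty(T)
h[0] = D.item()
Ak = np.eye(n)
for k in range(1, T):
    h[k] = (C @ Ak @ B).item()
    Ak = A @ Ak
y_conv = np.array([sum(h[k] * u[t - k] for k in range(t + 1))
                   for t in range(T)])

print(np.allclose(y_rnn, y_conv))  # the two realizations agree
```

Since the linear RNN reproduces the system exactly, encoding the RNN's weight matrices to finite precision amounts to encoding the system itself, which is why the bit count of the approximating RNN can be compared against the metric entropy of the LTI system class.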


