Learning Recurrent Neural Net Models of Nonlinear Systems

11/18/2020
by Joshua Hanson, et al.

We consider the following learning problem: Given sample pairs of input and output signals generated by an unknown nonlinear system (which is not assumed to be causal or time-invariant), we wish to find a continuous-time recurrent neural net with hyperbolic tangent activation function that approximately reproduces the underlying i/o behavior with high confidence. Leveraging earlier work concerned with matching output derivatives up to a given finite order, we reformulate the learning problem in familiar system-theoretic language and derive quantitative guarantees on the sup-norm risk of the learned model in terms of the number of neurons, the sample size, the number of derivatives being matched, and the regularity properties of the inputs, the outputs, and the unknown i/o map.
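To make the setting concrete, here is a minimal sketch (not the authors' implementation) of the model class described above: a continuous-time recurrent neural net with state equation x'(t) = tanh(A x(t) + B u(t)) and linear readout y(t) = C x(t), simulated by forward-Euler integration. All dimensions, parameter values, and function names below are illustrative assumptions.

```python
import numpy as np

def simulate_ct_rnn(A, B, C, u, x0, dt):
    """Integrate x'(t) = tanh(A x + B u) with forward Euler;
    return the readout y_k = C x_k at each step."""
    x = x0.copy()
    ys = []
    for u_k in u:
        x = x + dt * np.tanh(A @ x + B @ u_k)  # Euler step of the tanh RNN
        ys.append(C @ x)                       # linear output map
    return np.array(ys)

rng = np.random.default_rng(0)
n, m, p = 8, 1, 1                 # state, input, output dimensions (illustrative)
A = 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))

T, dt = 200, 0.01
u = np.sin(np.linspace(0, 2 * np.pi, T)).reshape(T, m)  # a sample input signal
y = simulate_ct_rnn(A, B, C, u, np.zeros(n), dt)
print(y.shape)  # (200, 1)
```

In the learning problem, the weights A, B, C would be fit so that the simulated outputs (and, per the derivative-matching approach, a finite number of their time derivatives) agree with the sampled i/o pairs.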


