A Dynamically Controlled Recurrent Neural Network for Modeling Dynamical Systems

10/31/2019
by Yiwei Fu, et al.

This work proposes a novel neural network architecture, called the Dynamically Controlled Recurrent Neural Network (DCRNN), specifically designed to model dynamical systems governed by ordinary differential equations (ODEs). The current state of such a system is fully determined by its state-space model together with its inputs and initial conditions. Long Short-Term Memory (LSTM) networks, which have proven very effective for memory-based tasks, may fail to model physical processes because they tend to memorize rather than learn the underlying dynamics. The proposed DCRNN includes learnable skip-connections across previous hidden states, and introduces a regularization term in the loss function based on Lyapunov stability theory. The regularizer allows the eigenvalues of the transfer function induced by the DCRNN to be placed at desired values, thereby acting as an internal controller for the hidden state trajectory. The results show that, for forecasting a chaotic dynamical system, the DCRNN outperforms the LSTM in 100 out of 100 randomized experiments, reducing the LSTM's forecasting mean squared error by 80.0% ± 3.0%.
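The abstract's two ingredients, learnable skip-connections over several past hidden states and a stability regularizer that steers the eigenvalues of the recurrent transition matrix, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the names (`DCRNNCell`, `stability_penalty`, `rho_target`) and the use of a spectral-radius penalty as a stand-in for the paper's Lyapunov-based eigenvalue-placement term are all assumptions.

```python
import numpy as np

class DCRNNCell:
    """Toy recurrent cell with learnable skip-connections to past hidden
    states (illustrative sketch, not the paper's architecture)."""

    def __init__(self, n_in, n_hidden, n_skip=3, seed=0):
        rng = np.random.default_rng(seed)
        self.W_h = rng.normal(scale=0.3, size=(n_hidden, n_hidden))  # recurrent weights
        self.W_x = rng.normal(scale=0.3, size=(n_hidden, n_in))      # input weights
        # learnable scalar gates on the last n_skip hidden states (skip-connections)
        self.alpha = np.full(n_skip, 1.0 / n_skip)
        self.n_skip = n_skip

    def step(self, x, history):
        """history: list of past hidden states, most recent last."""
        past = history[-self.n_skip:]
        # weighted combination of several previous hidden states, not just h_{t-1}
        h_ctx = sum(a * h for a, h in zip(self.alpha[-len(past):], past))
        return np.tanh(self.W_h @ h_ctx + self.W_x @ x)

def stability_penalty(W, rho_target=0.9):
    """Regularizer pushing the spectral radius of the recurrent matrix toward
    a target value: a crude proxy for the paper's Lyapunov-based term."""
    rho = max(abs(np.linalg.eigvals(W)))
    return (rho - rho_target) ** 2

# usage: unroll the cell on a short input sequence
cell = DCRNNCell(n_in=1, n_hidden=8)
h = [np.zeros(8)]
for t in range(5):
    h.append(cell.step(np.array([np.sin(0.1 * t)]), h))
loss_reg = stability_penalty(cell.W_h)
```

In training, `loss_reg` would be added to the forecasting loss so that gradient descent shapes the hidden-state dynamics as well as the predictions.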


Related research:

- 02/21/2018: Data-Driven Forecasting of High-Dimensional Chaotic Systems with Long-Short Term Memory Networks ("We introduce a data-driven forecasting method for high dimensional, chao...")
- 04/11/2019: GP-HD: Using Genetic Programming to Generate Dynamical Systems Models for Health Care ("The huge wealth of data in the health domain can be exploited to create ...")
- 05/24/2023: Reconstruction, forecasting, and stability of chaotic dynamics from partial data ("The forecasting and computation of the stability of chaotic systems from...")
- 04/27/2021: Initializing LSTM internal states via manifold learning ("We present an approach, based on learning an intrinsic data manifold, fo...")
- 10/08/2019: Inferring Dynamical Systems with Long-Range Dependencies through Line Attractor Regularization ("Vanilla RNN with ReLU activation have a simple structure that is amenabl...")
- 07/20/2023: Leveraging arbitrary mobile sensor trajectories with shallow recurrent decoder networks for full-state reconstruction ("Sensing is one of the most fundamental tasks for the monitoring, forecas...")
- 06/10/2020: Entanglement-Embedded Recurrent Network Architecture: Tensorized Latent State Propagation and Chaos Forecasting ("Chaotic time series forecasting has been far less understood despite its...")
