Time-Adaptive Recurrent Neural Networks

04/11/2022
by Mantas Lukoševičius et al.

Data are often sampled irregularly in time. Traditionally, Recurrent Neural Networks (RNNs) dealt with this by ignoring the irregularity, feeding the time differences as additional inputs, or resampling the data; all of these methods have shortcomings. We propose an elegant alternative approach in which the RNN itself is, in effect, resampled in time to match the timing of the data. We use the Echo State Network (ESN) and the Gated Recurrent Unit (GRU) as the basis for our solution. Such RNNs can be seen as discretizations of continuous-time dynamical systems, which gives our approach a solid theoretical grounding. Similar recent observations have been made for feed-forward neural networks in the form of neural ordinary differential equations. Our Time-Adaptive ESN (TAESN) and GRU (TAGRU) models allow the model time to be set directly and require no additional training, parameter tuning, or computation compared to their regular counterparts, thus retaining the original efficiency. We confirm empirically that our models can effectively compensate for the time non-uniformity of the data, and demonstrate on several real-world non-uniformly sampled datasets that they compare favorably to data resampling, classical RNN methods, and alternative RNN models proposed to deal with time irregularities.
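To make the "resampling the RNN in time" idea concrete, here is a minimal sketch of a leaky-integrator reservoir update whose effective leak is modulated by the irregular time step. This is an illustrative reconstruction, not the paper's exact formulation: the variable names, the base leaking rate `a`, and the simple Euler scaling `leak = a * dt` are assumptions chosen to show how a discretized continuous-time state update can absorb non-uniform sampling intervals.

```python
import numpy as np

# Hypothetical sketch of a time-adaptive leaky-integrator ESN update.
# Assumption (not from the abstract): the reservoir is an Euler
# discretization of continuous-time dynamics x' = a * (-x + tanh(W_in u + W x)),
# so the effective leak for one step is scaled by the time gap dt.

rng = np.random.default_rng(0)
n_in, n_res = 2, 50
W_in = rng.normal(scale=0.5, size=(n_res, n_in))        # input weights
W = rng.normal(scale=1.0 / np.sqrt(n_res), size=(n_res, n_res))  # reservoir weights
a = 0.5  # base leaking rate per unit of (model) time

def taesn_step(x, u, dt):
    """One reservoir update over an irregular time step dt."""
    leak = a * dt  # effective leak for this step; assumes 0 <= a * dt <= 1
    x_target = np.tanh(W_in @ u + W @ x)
    return (1.0 - leak) * x + leak * x_target

# Irregularly sampled input sequence: timestamps need not be equidistant.
times = np.array([0.0, 0.3, 1.1, 1.25, 2.0])
inputs = rng.normal(size=(len(times), n_in))

x = np.zeros(n_res)
t_prev = times[0]
for t, u in zip(times, inputs):
    x = taesn_step(x, u, t - t_prev)  # dt varies from step to step
    t_prev = t

print(x.shape)  # state vector of the reservoir
```

Note how no extra trained parameters are introduced: the time gap enters only through the state-update interpolation, which is consistent with the abstract's claim that the time-adaptive models need no additional training or computation.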


