On the balance between the training time and interpretability of neural ODE for time series modelling

06/07/2022
by Yakov Golovanev, et al.

Most machine learning methods are used as black boxes for modelling. Physics-based approaches such as neural ODEs (ordinary differential equations) let us extract some knowledge from the trained model. Neural ODEs offer advantages such as a possibly richer class of represented functions, greater interpretability than black-box machine learning models, and the ability to describe both the trend and the local behaviour of a series. These advantages are especially important for time series with complicated trends. The known drawback, however, is the high training time compared to the autoregressive models and long short-term memory (LSTM) networks widely used for data-driven time series modelling. Therefore, to apply neural ODEs in practice, we must balance interpretability against training time. This paper shows that a modern neural ODE cannot be reduced to simpler models for time-series modelling applications: its complexity is comparable to, or exceeds, that of conventional time-series modelling tools. The only interpretation that can be extracted is the eigenspace of the operator, which is an ill-posed problem for a large system; spectra can instead be obtained with classical analysis methods that do not suffer from extended training time. Consequently, we reduce the neural ODE to a simpler linear form and propose a new view of time-series modelling that combines neural networks with an ODE-system approach.
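To make the "simpler linear form" idea concrete, here is a minimal sketch of what reducing an ODE model to a linear system and reading off its spectrum can look like: fit A in dx/dt = A x by least squares on finite differences of the trajectory, then take the eigenvalues of A as the interpretable quantities (decay rates and oscillation frequencies). This is an illustrative reconstruction, not the paper's implementation; the function name, the toy damped-oscillator data, and the forward-difference estimator are all assumptions made for the example.

```python
import numpy as np

def fit_linear_ode(x, dt):
    """Estimate A in dx/dt = A x from samples x[k] of shape [T, d]."""
    dxdt = (x[1:] - x[:-1]) / dt      # forward differences, shape [T-1, d]
    X = x[:-1]                        # states at the left endpoints
    # Solve X @ A.T ≈ dxdt in the least-squares sense.
    A_T, *_ = np.linalg.lstsq(X, dxdt, rcond=None)
    return A_T.T

# Toy data: a damped oscillator integrated with explicit Euler.
A_true = np.array([[-0.1, 1.0],
                   [-1.0, -0.1]])
dt = 0.01
x = np.empty((2000, 2))
x[0] = [1.0, 0.0]
for k in range(1999):
    x[k + 1] = x[k] + dt * (A_true @ x[k])

A_hat = fit_linear_ode(x, dt)
# Spectrum of the fitted operator: eigenvalues -0.1 ± 1j correspond to a
# decay rate of 0.1 and an oscillation frequency of 1 rad per unit time.
eigvals = np.linalg.eigvals(A_hat)
```

Unlike a trained neural ODE, the linear operator A is cheap to estimate and its eigendecomposition gives a direct interpretation of the dynamics, which is the trade-off the abstract argues for.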



