Temporal Fusion Transformers for Interpretable Multi-horizon Time Series Forecasting

12/19/2019
by Bryan Lim, et al.

Multi-horizon forecasting problems often contain a complex mix of inputs – including static (i.e. time-invariant) covariates, known future inputs, and other exogenous time series that are only observed historically – without any prior information on how they interact with the target. While several deep learning models have been proposed for multi-step prediction, they typically comprise black-box models which do not account for the full range of inputs present in common scenarios. In this paper, we introduce the Temporal Fusion Transformer (TFT) – a novel attention-based architecture which combines high-performance multi-horizon forecasting with interpretable insights into temporal dynamics. To learn temporal relationships at different scales, the TFT utilizes recurrent layers for local processing and interpretable self-attention layers for learning long-term dependencies. The TFT also uses specialized components for the judicious selection of relevant features and a series of gating layers to suppress unnecessary components, enabling high performance in a wide range of regimes. On a variety of real-world datasets, we demonstrate significant performance improvements over existing benchmarks, and showcase three practical interpretability use-cases of TFT.
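The abstract describes the main TFT ingredients at a component level: recurrent layers for local temporal processing, self-attention for long-range dependencies, and gating to suppress unnecessary components. Below is a minimal PyTorch sketch of those ideas under simplifying assumptions; the class names, layer sizes, and wiring are illustrative and do not reproduce the exact architecture or variable-selection networks from the paper.

```python
# Simplified sketch of TFT-style building blocks: a gated residual network
# (GLU gating + residual), LSTM-based local processing, and multi-head
# self-attention over the temporal dimension. Sizes and wiring are
# illustrative assumptions, not the paper's exact specification.
import torch
import torch.nn as nn


class GatedResidualNetwork(nn.Module):
    """Nonlinear transform with a GLU gate and a residual connection,
    used here to let the model suppress a component when it is not needed."""

    def __init__(self, d_model: int, dropout: float = 0.1):
        super().__init__()
        self.fc1 = nn.Linear(d_model, d_model)
        self.fc2 = nn.Linear(d_model, d_model)
        self.gate = nn.Linear(d_model, 2 * d_model)  # GLU: value half + gate half
        self.norm = nn.LayerNorm(d_model)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.fc2(torch.nn.functional.elu(self.fc1(x)))
        h = self.dropout(h)
        gated = nn.functional.glu(self.gate(h), dim=-1)  # halves back to d_model
        return self.norm(x + gated)  # gated skip connection + layer norm


class TinyTFTBlock(nn.Module):
    """LSTM for local (short-term) patterns, then self-attention for
    long-term dependencies, each followed by a gated residual step."""

    def __init__(self, d_model: int = 32, n_heads: int = 4):
        super().__init__()
        self.lstm = nn.LSTM(d_model, d_model, batch_first=True)
        self.grn_local = GatedResidualNetwork(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.grn_attn = GatedResidualNetwork(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, d_model) -- already-embedded covariates
        local, _ = self.lstm(x)
        local = self.grn_local(local)
        attended, attn_weights = self.attn(local, local, local, need_weights=True)
        # attn_weights could be inspected for interpretability-style analyses
        return self.grn_attn(local + attended)


if __name__ == "__main__":
    block = TinyTFTBlock()
    y = block(torch.randn(8, 24, 32))  # 8 series, 24 time steps, 32 features
    print(y.shape)  # torch.Size([8, 24, 32])
```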
