Temporal Dependencies in Feature Importance for Time Series Predictions

07/29/2021
by Clayton Rooke, et al.

Explanation methods applied to sequential models for multivariate time series prediction are receiving more attention in the machine learning literature. While current methods perform well at providing instance-wise explanations, they struggle to efficiently and accurately attribute importance over long periods of time and with complex feature interactions. We propose WinIT, a framework for evaluating feature importance in time series prediction settings by quantifying the shift in predictive distribution over multiple instances in a windowed setting. Comprehensive empirical evidence shows our method improves on the previous state-of-the-art, FIT, by capturing temporal dependencies in feature importance. We also demonstrate how our solution improves the appropriate attribution of features within time steps, which existing interpretability methods often fail to do. We compare against baselines on simulated and real-world clinical data. WinIT achieves 2.47x better performance than FIT and other feature importance methods on the real-world clinical MIMIC mortality task. The code for this work is available at https://github.com/layer6ai-labs/WinIT.
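The abstract describes the core idea only at a high level: score an observation by the shift it induces in the model's predictive distribution across a window of subsequent predictions. The snippet below is a minimal sketch in that spirit, not the exact WinIT definition from the paper; the `predict` interface, the mean-imputation counterfactual, and the KL-divergence shift measure are illustrative assumptions.

```python
# Minimal sketch of a windowed feature-importance score: the importance of
# feature `feature` at time `t` is the shift it induces in the model's
# predictive distribution over the next `window` prediction steps.
# NOTE: the model interface, the mean-imputation counterfactual, and the
# KL-divergence measure are assumptions for illustration, not WinIT itself.
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete predictive distributions."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def windowed_importance(predict, x, feature, t, window):
    """Score `feature` at time step `t` of a multivariate series x (T x D).

    `predict(series, s)` is assumed to return the model's predictive
    distribution (a probability vector) given observations series[:s+1].
    The observation is replaced by its historical mean (a simple
    counterfactual), and the distribution shift is averaged over the window.
    """
    x_cf = x.copy()
    x_cf[t, feature] = x[:t + 1, feature].mean()  # counterfactual value
    T = x.shape[0]
    score = 0.0
    for s in range(t, min(t + window, T)):
        score += kl_divergence(predict(x, s), predict(x_cf, s))
    return score / window

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(20, 3))  # toy multivariate series: T=20, D=3

    def toy_predict(series, s):
        """Stand-in for a trained sequential model: logistic score over a
        running sum of the observations up to time s."""
        z = series[:s + 1].sum()
        p = 1.0 / (1.0 + np.exp(-z))
        return np.array([1.0 - p, p])

    print(windowed_importance(toy_predict, x, feature=1, t=5, window=4))
```

Aggregating the shift over a window of future predictions, rather than a single instance, is what lets a score of this form reflect temporal dependencies: an observation that only affects predictions several steps later still receives credit.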

Related research

What went wrong and when? Instance-wise Feature Importance for Time-series Models (03/05/2020)
Multivariate time series models are poised to be used for decision suppo...

Ultra-marginal Feature Importance (04/21/2022)
Scientists frequently prioritize learning from data rather than training...

Time Interpret: a Unified Model Interpretability Library for Time Series (06/05/2023)
We introduce time_interpret, a library designed as an extension of Captum, with a spec...

Explaining Time Series Predictions with Dynamic Masks (06/09/2021)
How can we explain the predictions of a machine learning model? When the...

OpenFE: Automated Feature Generation beyond Expert-level Performance (11/22/2022)
The goal of automated feature generation is to liberate machine learning...

Neural Additive Models for Nowcasting (05/20/2022)
Deep neural networks (DNNs) are one of the most highlighted methods in m...

Conditional Feature Importance for Mixed Data (10/06/2022)
Despite the popularity of feature importance measures in interpretable m...
