Feature Importance Explanations for Temporal Black-Box Models

02/23/2021
by Akshay Sood, et al.

Models in the supervised learning framework may capture rich and complex representations over their features that are hard for humans to interpret. Existing methods for explaining such models are often specific to particular architectures and to data whose features have no time-varying component. In this work, we propose TIME, a method for explaining models that are inherently temporal in nature. Our approach (i) uses a model-agnostic, permutation-based procedure to analyze global feature importance, (ii) identifies the importance of salient features with respect to their temporal ordering as well as localized windows of influence, and (iii) uses hypothesis testing to provide statistical rigor.
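
The paper's own TIME implementation is not shown on this page, but the combination described in (i) and (iii) can be sketched in a few lines: permute a single feature only within a candidate time window, measure how much the model's performance drops, and treat the repeated permutations as a simple test of whether that feature-window matters. The snippet below is a minimal illustration of that general recipe, not the authors' method; the (samples, timesteps, features) array layout, the temporal_permutation_importance name, and the Monte Carlo p-value are assumptions made for this example.

```python
import numpy as np


def temporal_permutation_importance(model, X, y, metric, feature, window,
                                    n_perm=200, seed=0):
    """Permutation importance of one feature restricted to a time window.

    Assumes X has shape (n_samples, n_timesteps, n_features), that
    model.predict(X) returns predictions comparable to y, and that
    metric(y, y_hat) is a score where higher is better.
    """
    rng = np.random.default_rng(seed)
    start, stop = window
    baseline = metric(y, model.predict(X))

    permuted_scores = np.empty(n_perm)
    for i in range(n_perm):
        X_perm = X.copy()
        # Shuffle this feature's values across samples, but only inside the
        # chosen time window, severing its link to the outcome there while
        # leaving every other feature and time step untouched.
        idx = rng.permutation(X.shape[0])
        X_perm[:, start:stop, feature] = X[idx, start:stop, feature]
        permuted_scores[i] = metric(y, model.predict(X_perm))

    # Importance = average performance drop caused by the permutation.
    importance = baseline - permuted_scores.mean()
    # Heuristic Monte Carlo p-value for "this feature-window does not matter":
    # the fraction of permutations scoring at least as well as the baseline
    # (with the usual +1 correction to avoid zero p-values).
    p_value = (np.sum(permuted_scores >= baseline) + 1) / (n_perm + 1)
    return importance, p_value
```

For instance, with a fitted classifier clf and accuracy as the metric, temporal_permutation_importance(clf, X_test, y_test, lambda y, yh: (y == yh).mean(), feature=3, window=(20, 30)) would score feature 3 over time steps 20 through 29; sweeping the window across the series is one simple way to probe the localized windows of influence the abstract refers to.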

Related research

- Permutation-based Hypothesis Testing for Neural Networks (01/26/2023)
- Understanding Learned Models by Identifying Important Features at the Right Resolution (11/18/2018)
- Interpretation of Black Box NLP Models: A Survey (03/31/2022)
- Inherent Inconsistencies of Feature Importance (06/16/2022)
- Label-Free Explainability for Unsupervised Models (03/03/2022)
- S-LIME: Stabilized-LIME for Model Explanation (06/15/2021)
- Disentangling Influence: Using Disentangled Representations to Audit Model Predictions (06/20/2019)
