Granger-causal Attentive Mixtures of Experts

02/06/2018
by Patrick Schwab, et al.

Several methods have recently been proposed to detect salient input features for the outputs of neural networks. These methods offer a qualitative glimpse at feature importance, but they fall short of providing quantifiable attributions that can be compared across decisions, as well as measures of the expected quality of their explanations. To address these shortcomings, we present an attentive mixture of experts (AME) that couples attentive gating with a Granger-causal objective to jointly produce accurate predictions and measures of feature importance. We demonstrate the utility of AMEs by determining factors driving demand for medical prescriptions, comparing predictive features for Parkinson's disease, and pinpointing discriminatory genes across cancer types.
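The abstract couples two pieces: attentive gating over per-feature experts, whose attention weights double as importance scores, and an auxiliary Granger-causal objective that ties those weights to how much each feature actually reduces prediction error when present versus withheld. Below is a minimal sketch of that idea, assuming a PyTorch setup; the class `AME`, the helper `granger_targets`, and the 0.1 auxiliary weight are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of the AME idea, assuming PyTorch; `AME`,
# `granger_targets` and the 0.1 auxiliary weight are illustrative
# assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AME(nn.Module):
    """One small expert per input feature; softmax attention over the
    expert states yields both the prediction and importance scores a_i."""
    def __init__(self, num_features, hidden=16):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                           nn.Linear(hidden, hidden))
             for _ in range(num_features)])
        self.attn = nn.Linear(hidden, 1)  # scores each expert's state
        self.head = nn.Linear(hidden, 1)  # maps the pooled state to an output

    def forward(self, x, mask=None):
        # h: (batch, num_features, hidden), one hidden state per expert
        h = torch.stack([e(x[:, i:i + 1])
                         for i, e in enumerate(self.experts)], dim=1)
        scores = self.attn(h).squeeze(-1)
        if mask is not None:  # withhold masked experts from the mixture
            scores = scores.masked_fill(mask, float('-inf'))
        a = F.softmax(scores, dim=1)  # attention weights = importances a_i
        pooled = (a.unsqueeze(-1) * h).sum(dim=1)
        return self.head(pooled).squeeze(-1), a

def granger_targets(model, x, y):
    """Granger-causal targets: normalised increase in squared error when
    each expert is withheld (a hypothetical helper for illustration)."""
    with torch.no_grad():
        base_err = (model(x)[0] - y) ** 2  # error with all features present
        deltas = []
        for i in range(x.shape[1]):
            mask = torch.zeros_like(x, dtype=torch.bool)
            mask[:, i] = True
            err_i = (model(x, mask)[0] - y) ** 2  # error without feature i
            deltas.append(F.relu(err_i - base_err))
        delta = torch.stack(deltas, dim=1) + 1e-8
        return delta / delta.sum(dim=1, keepdim=True)

# Joint objective: prediction loss plus an auxiliary term pulling the
# attention weights a_i towards the Granger-causal targets omega_i.
model = AME(num_features=8)
x, y = torch.randn(32, 8), torch.randn(32)
pred, a = model(x)
omega = granger_targets(model, x, y)
loss = F.mse_loss(pred, y) + \
       0.1 * F.kl_div((a + 1e-8).log(), omega, reduction='batchmean')
loss.backward()
```

The key design choice this sketch tries to capture is that importance is defined operationally, in the Granger sense: a feature matters to the degree that error rises when its expert is withheld, and the softmax over positive error deltas is one simple way to turn that into a target distribution for the attention weights.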

Related research

- CXPlain: Causal Explanations for Model Interpretation under Uncertainty (10/27/2019)
- Model-Attentive Ensemble Learning for Sequence Modeling (02/23/2021)
- Explaining the Unexplained: A CLass-Enhanced Attentive Response (CLEAR) Approach to Understanding Deep Neural Networks (04/13/2017)
- Decomposition of Global Feature Importance into Direct and Associative Components (DEDACT) (06/15/2021)
- A Mixture of h-1 Heads is Better than h Heads (05/13/2020)
- On Shapley Credit Allocation for Interpretability (12/10/2020)
