CXPlain: Causal Explanations for Model Interpretation under Uncertainty

10/27/2019
by Patrick Schwab et al.

Feature importance estimates that inform users about the degree to which given inputs influence the output of a predictive model are crucial for understanding, validating, and interpreting machine-learning models. However, providing fast and accurate estimates of feature importance for high-dimensional data, and quantifying the uncertainty of such estimates remain open challenges. Here, we frame the task of providing explanations for the decisions of machine-learning models as a causal learning task, and train causal explanation (CXPlain) models that learn to estimate to what degree certain inputs cause outputs in another machine-learning model. CXPlain can, once trained, be used to explain the target model in little time, and enables the quantification of the uncertainty associated with its feature importance estimates via bootstrap ensembling. We present experiments that demonstrate that CXPlain is significantly more accurate and faster than existing model-agnostic methods for estimating feature importance. In addition, we confirm that the uncertainty estimates provided by CXPlain ensembles are strongly correlated with their ability to accurately estimate feature importance on held-out data.
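
Below is a minimal sketch of the pipeline the abstract describes: Granger-causal importance targets are obtained by masking each input feature and measuring the increase in the target model's loss, an explanation model is then trained to predict the resulting per-sample importance distribution, and a bootstrap ensemble of such explanation models yields uncertainty estimates. The dataset, the random-forest target model, the mean-imputation masking, the MLP explanation model, and the helper `sample_log_loss` are all illustrative assumptions rather than the authors' implementation; the paper trains the explanation model with a KL-divergence objective, which this sketch approximates with plain multi-output regression.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPRegressor

def sample_log_loss(model, X, y, eps=1e-12):
    """Per-sample negative log-likelihood of a fitted classifier (hypothetical helper)."""
    p = np.clip(model.predict_proba(X)[np.arange(len(y)), y], eps, 1.0)
    return -np.log(p)

# Illustrative target model to be explained.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
target = RandomForestClassifier(random_state=0).fit(X, y)

# Granger-causal importance targets: the loss increase when each feature is
# masked (here crudely, by mean imputation), normalised per sample so that
# each row of `omega` is an importance distribution over features.
base = sample_log_loss(target, X, y)
deltas = np.zeros_like(X)
for j in range(X.shape[1]):
    X_masked = X.copy()
    X_masked[:, j] = X[:, j].mean()
    deltas[:, j] = np.maximum(sample_log_loss(target, X_masked, y) - base, 0.0)
omega = deltas / np.maximum(deltas.sum(axis=1, keepdims=True), 1e-12)

# Bootstrap ensemble of explanation models: each member learns X -> omega on
# a resample of the data; disagreement across members quantifies uncertainty.
rng = np.random.default_rng(0)
ensemble = []
for _ in range(10):
    idx = rng.integers(0, len(X), len(X))
    m = MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
    ensemble.append(m.fit(X[idx], omega[idx]))

preds = np.stack([m.predict(X) for m in ensemble])  # (members, samples, features)
importance = preds.mean(axis=0)   # point estimate of feature importance
uncertainty = preds.std(axis=0)   # bootstrap spread = uncertainty estimate
```

Once the ensemble is trained, explaining a new input costs only a few forward passes rather than one masked re-evaluation of the target model per feature, which is the source of the speed advantage the abstract claims; features whose ensemble spread is large relative to their mean importance are ones where the explanation should be trusted less.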
