Explaining a Series of Models by Propagating Local Feature Attributions

04/30/2021
by Hugh Chen et al.

Pipelines that chain several machine learning models (e.g., stacked generalization ensembles, neural network feature extractors) improve performance in many domains but are difficult to understand. To improve their transparency, we introduce a framework to propagate local feature attributions through complex pipelines of models based on a connection to the Shapley value. Our framework enables us to (1) draw higher-level conclusions based on groups of gene expression features for Alzheimer's and breast cancer histologic grade prediction, (2) gain important insights about the errors a mortality prediction model makes by explaining a loss that is a non-linear transformation of the model's output, (3) explain pipelines of deep feature extractors fed into a tree model for MNIST digit classification, and (4) interpret important consumer scores and raw features in a stacked generalization setting to predict risk for home equity line of credit applications. Importantly, in the consumer scoring example, DeepSHAP is the only feature attribution technique we are aware of that allows independent entities (e.g., lending institutions, credit bureaus) to compute attributions for the original features without having to share their proprietary models. Quantitatively comparing our framework to model-agnostic approaches, we show that our approach is an order of magnitude faster while providing equally salient explanations. In addition, we describe how to incorporate an empirical baseline distribution, which allows us to (1) demonstrate the bias of previous approaches that use a single baseline sample, and (2) present a straightforward methodology for choosing meaningful baseline distributions.
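Two of the abstract's key ideas, explaining a pipeline (feature extractor followed by a prediction head) and using an empirical baseline distribution rather than a single baseline sample, can be illustrated with the open-source shap library's DeepExplainer, the reference implementation of DeepSHAP. The sketch below is illustrative rather than the authors' experimental code: the Keras models, data, and sample sizes are placeholders, and exact DeepExplainer behavior varies across shap and TensorFlow versions.

```python
# Minimal sketch (placeholder models and data, not the paper's experiments):
# explain a two-stage pipeline end to end, attributing the final output back
# to the raw input features, and contrast a single-sample baseline with an
# empirical baseline distribution.
import numpy as np
import shap
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20)).astype(np.float32)  # synthetic raw features

# Hypothetical pipeline: deep feature extractor -> prediction head.
extractor = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(8, activation="relu"),
])
head = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation="sigmoid", input_shape=(8,)),
])
pipeline = tf.keras.Sequential([extractor, head])

# Empirical baseline distribution: a sample of real background examples,
# instead of a single reference vector.
background = X[rng.choice(len(X), size=100, replace=False)]
explainer = shap.DeepExplainer(pipeline, background)
phi = explainer.shap_values(X[:10])  # attributions on the raw input features

# Single-baseline variant (e.g., the feature means); the paper argues this
# choice biases attributions relative to averaging over a background sample.
single_baseline = X.mean(axis=0, keepdims=True)
phi_single = shap.DeepExplainer(pipeline, single_baseline).shap_values(X[:10])
```

Note that this sketch only covers the end-to-end differentiable case. The framework described in the paper generalizes the same layer-by-layer propagation to mixed pipelines, e.g., a deep extractor feeding a tree model as in the MNIST experiment, or models held by separate entities as in the consumer scoring example, by explaining each component locally and composing the attributions.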
