Understanding Learned Models by Identifying Important Features at the Right Resolution

11/18/2018
by Kyubin Lee, et al.

In many application domains, it is important to characterize how complex learned models make their decisions across the distribution of instances. One way to do this is to identify the features and interactions among them that contribute to a model's predictive accuracy. We present a model-agnostic approach to this task that makes the following specific contributions. Our approach (i) tests feature groups, in addition to base features, and identifies the level of resolution at which important features can be detected, (ii) uses hypothesis testing to rigorously assess the effect of each feature on the model's loss, (iii) employs a hierarchical approach to control the false discovery rate when testing feature groups and individual base features for importance, and (iv) uses hypothesis testing to identify important interactions among features and feature groups. We evaluate our approach by analyzing random forest and LSTM neural network models learned in two challenging biomedical applications.
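The abstract describes permutation-based hypothesis tests of feature importance combined with false-discovery-rate control. The sketch below is a rough illustration of how such a test can be set up, not a reproduction of the paper's method: it computes a permutation p-value for the effect of a feature (or feature group) on a model's loss, then applies Benjamini-Hochberg correction across the tested features. The function names, the `model.predict`/`loss_fn` interface, and the flat BH procedure are assumptions made for this example; the paper itself uses a hierarchical FDR-controlling procedure over a feature hierarchy, which this sketch does not implement.

```python
import numpy as np

def permutation_pvalue(model, X, y, loss_fn, feature_idx, n_perm=1000, seed=0):
    """Permutation p-value for the importance of a feature or feature group.

    Null hypothesis: the model's loss with the feature(s) permuted is
    exchangeable with its original loss, i.e., the feature(s) do not
    contribute to predictive accuracy. `feature_idx` may be a single
    column index or a list of indices, so the same test applies to base
    features and to feature groups.
    """
    rng = np.random.default_rng(seed)
    base_loss = loss_fn(y, model.predict(X))
    n_not_worse = 0
    for _ in range(n_perm):
        X_perm = X.copy()
        # Shuffle the selected column(s) jointly across rows, breaking
        # their association with the response while preserving their
        # joint distribution.
        X_perm[:, feature_idx] = rng.permutation(X_perm[:, feature_idx], axis=0)
        if loss_fn(y, model.predict(X_perm)) <= base_loss:
            n_not_worse += 1
    # If the feature matters, permuting it almost always increases the
    # loss, so few permuted losses fall at or below the original and the
    # p-value is small. The +1 terms keep the p-value valid.
    return (n_not_worse + 1) / (n_perm + 1)

def benjamini_hochberg(pvals, alpha=0.05):
    """Boolean rejection mask controlling the false discovery rate at alpha."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()  # largest rank passing the BH line
        reject[order[:k + 1]] = True
    return reject
```

One way such pieces fit together in a hierarchical scheme: call `permutation_pvalue` once per feature group, pass the resulting p-values to `benjamini_hochberg`, and recurse into finer subgroups or base features only within groups that are rejected, so that the testing budget is spent at finer resolutions only where a group-level effect has been detected.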

