Nothing Else Matters: Model-Agnostic Explanations By Identifying Prediction Invariance

11/17/2016
by Marco Tulio Ribeiro et al.

At the core of interpretable machine learning is the question of whether humans are able to make accurate predictions about a model's behavior. Implicit in this question are three properties of the interpretable output: coverage, precision, and effort. Coverage refers to how often humans think they can predict the model's behavior, precision to how accurate those predictions are, and effort to either the up-front work of interpreting the model or the work of making each prediction about its behavior. In this work, we propose anchor-LIME (aLIME), a model-agnostic technique that produces high-precision rule-based explanations for which the coverage boundaries are very clear. We compare aLIME to linear LIME with simulated experiments, and demonstrate the flexibility of aLIME with qualitative examples from a variety of domains and tasks.

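To make the coverage and precision notions concrete, below is a minimal sketch of how one might measure them for a single rule-based (anchor) explanation of a black-box classifier. The dataset, the model, and the example `anchor` rule are illustrative assumptions; this is not the aLIME procedure for finding anchors, which the full paper describes.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Illustrative data and black-box model (assumptions, not from the paper).
X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Hypothetical anchor rule: a conjunction of per-feature interval conditions,
# written as {feature_index: (low, high)}.
anchor = {0: (0.5, np.inf), 2: (-np.inf, 0.0)}

def matches(data, rule):
    """Boolean mask of rows that satisfy every condition in the rule."""
    mask = np.ones(len(data), dtype=bool)
    for i, (lo, hi) in rule.items():
        mask &= (data[:, i] >= lo) & (data[:, i] <= hi)
    return mask

x = X[0]                                        # instance being explained
anchored_label = model.predict(x.reshape(1, -1))[0]

mask = matches(X, anchor)
coverage = mask.mean()                          # how often the rule applies
precision = (model.predict(X[mask]) == anchored_label).mean() if mask.any() else 0.0
print(f"coverage = {coverage:.2f}, precision = {precision:.2f}")
```

Coverage here is the fraction of instances the rule applies to, and precision is the fraction of those instances on which the model agrees with the anchored prediction.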