Supervised Local Modeling for Interpretability

07/09/2018
by Gregory Plumb, et al.

Model interpretability is an increasingly important component of practical machine learning. Some of the most common forms of interpretability systems are example-based, local, and global explanations. One of the main challenges in interpretability is designing explanation systems that can capture aspects of each of these explanation types in order to develop a more thorough understanding of the model. We address this challenge with a novel model called SLIM that combines local linear modeling techniques with a dual interpretation of random forests (both as a supervised neighborhood approach and as a feature selection method). SLIM has two fundamental advantages over existing interpretability systems. First, while it is effective as a black-box explanation system, SLIM is itself a highly accurate predictive model that provides faithful self-explanations, and thus sidesteps the typical accuracy-interpretability trade-off. Second, SLIM provides both example-based and local explanations and can detect global patterns, which allows it to diagnose limitations in its local explanations.
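The supervised-neighborhood idea described above can be sketched briefly: a trained random forest defines a neighborhood around a query point (training points that co-occur with it in the same leaves), and a linear model fit with those neighborhood weights serves as a local explanation. The sketch below is illustrative only, assuming a toy regression problem; it is not the authors' implementation of SLIM, and the function and variable names are invented for this example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

# Toy regression data: y depends nonlinearly on features 0 and 1,
# while feature 2 is irrelevant.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 3))
y = np.sin(2 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=500)

forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

def local_linear_explanation(x_query):
    """Fit a linear model weighted by the forest-induced neighborhood
    of x_query (a simplified, hypothetical stand-in for SLIM)."""
    # Leaf index of every training point and of the query, per tree.
    train_leaves = forest.apply(X)                        # (n_samples, n_trees)
    query_leaves = forest.apply(x_query.reshape(1, -1))[0]
    # Weight each training point by the fraction of trees in which it
    # lands in the same leaf as the query ("supervised neighborhood").
    weights = (train_leaves == query_leaves).mean(axis=1)
    lm = LinearRegression().fit(X, y, sample_weight=weights)
    return lm  # lm.coef_ gives the local feature effects

x0 = np.array([0.1, -0.2, 0.5])
lm = local_linear_explanation(x0)
print("local coefficients:", lm.coef_)
print("local prediction:", lm.predict(x0.reshape(1, -1))[0])
```

The local coefficients play the role of a faithful local explanation, while the underlying forest remains a globally accurate predictor; combining the two is the trade-off the abstract says SLIM sidesteps.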

