Assessing the Stability of Interpretable Models

10/22/2018
by Riccardo Guidotti, et al.

Interpretable classification models are built to provide a comprehensible description of their decision logic to an external oversight agent. Considered in isolation, a decision tree, a set of classification rules, or a linear model is widely recognized as human-interpretable. However, such models are generated as part of a larger analytical process that includes, in particular, data collection and filtering. Selection bias in data collection or in data pre-processing may affect the learned model. Although model induction algorithms are designed to generalize, they optimize for predictive accuracy; how interpretability is affected instead remains unclear. We conduct an experimental analysis to investigate whether interpretable models can cope with data selection bias as far as interpretability is concerned.
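The abstract does not describe the paper's experimental protocol, but the underlying phenomenon can be sketched with a minimal, hypothetical example. The idea: under selection bias, two models can look equally accurate on their own training samples while encoding visibly different decision logic. Here a one-feature decision stump stands in for an interpretable model, its learned threshold being its entire "explanation"; `fit_stump` and the synthetic data are invented for illustration and are not from the paper.

```python
# Toy illustration (not the paper's actual setup): a one-feature
# decision stump is the simplest interpretable model, and its learned
# threshold is the whole of its decision logic.

def fit_stump(points):
    """Learn a threshold t predicting positive when x >= t,
    minimizing misclassifications on `points` (list of (x, label))."""
    xs = sorted({x for x, _ in points})
    candidates = [(a + b) / 2 for a, b in zip(xs, xs[1:])] or xs
    best_t, best_err = None, float("inf")
    for t in candidates:
        err = sum((x >= t) != y for x, y in points)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

# Synthetic data: the true concept is "positive iff x >= 5".
data = [(x, x >= 5) for x in (i * 0.5 for i in range(20))]

# Model learned from the full sample: threshold lands between 4.5 and 5.0.
t_full = fit_stump(data)

# Selection bias: examples near the decision boundary are never collected.
biased = [(x, y) for x, y in data if not (5.0 <= x <= 6.0)]
t_biased = fit_stump(biased)  # drifts away from the true boundary

# Stability of the explanation: fraction of inputs on which the two
# stumps (each error-free on its own training sample) agree.
agree = sum((x >= t_full) == (x >= t_biased) for x, _ in data) / len(data)
print(t_full, t_biased, agree)
```

Both stumps achieve zero error on the data they were trained on, yet the biased sample shifts the learned threshold, so the two "interpretations" of the same concept disagree on part of the input space. This is the sense in which predictive accuracy alone does not guarantee a stable interpretation.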


