
Identifying the Most Explainable Classifier

10/18/2019
by Brett Mullins, et al.

We introduce the notion of pointwise coverage to measure the explainability properties of machine learning classifiers. An explanation for a prediction is a definably simple region of the feature space sharing the same label as the prediction, and the coverage of an explanation measures its size or generalizability. With this notion of explanation, we investigate whether there is a natural characterization of the most explainable classifier. In accordance with our intuitions, we prove that the binary linear classifier is uniquely the most explainable classifier up to negligible sets.
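The abstract does not give the formal definition of pointwise coverage, but the idea can be illustrated for the binary linear classifier. The sketch below is a hypothetical illustration, not the paper's construction: it assumes that explanations are open balls around a point on which the label is constant, and that coverage is measured by the radius of the largest such ball. Under that assumption, coverage at a point is simply its distance to the decision hyperplane.

```python
import numpy as np

# Hypothetical illustration of pointwise coverage for a binary linear
# classifier f(x) = sign(w.x + b). Assumption made here for illustration:
# explanations are open balls around x with a constant label, and coverage
# is the radius of the largest such ball. For a linear classifier that
# radius is the distance from x to the hyperplane w.x + b = 0, so it has
# a closed form.

def linear_predict(w, b, x):
    """Label assigned by the linear classifier at point x."""
    return int(np.sign(np.dot(w, x) + b))

def ball_coverage(w, b, x):
    """Radius of the largest ball around x on which the label is constant."""
    return abs(np.dot(w, x) + b) / np.linalg.norm(w)

w = np.array([1.0, -2.0])
b = 0.5
x = np.array([3.0, 1.0])
print(linear_predict(w, b, x))   # predicted label at x
print(ball_coverage(w, b, x))    # largest same-label radius around x
```

For more expressive classifiers, the largest same-label region around a point generally has no such closed form, which gives some informal intuition for why a linear decision boundary might score highest under a coverage-style measure; the paper's actual definitions and proof should be consulted for the precise statement.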

