
Identifying the Most Explainable Classifier

by Brett Mullins, et al.

We introduce the notion of pointwise coverage to measure the explainability properties of machine learning classifiers. An explanation for a prediction is a definably simple region of the feature space sharing the same label as the prediction, and the coverage of an explanation measures its size, or generalizability. With this notion of explanation, we investigate whether there is a natural characterization of the most explainable classifier. In accordance with our intuition, we prove that the binary linear classifier is uniquely the most explainable classifier, up to negligible sets.
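The idea can be illustrated with a toy sketch. Everything below (the unit-square feature space, the two classifiers, and the Monte Carlo estimate) is our own invented stand-in for the paper's measure-theoretic definitions: since an explanation must be a same-label region containing the point, the total measure of the same-label set bounds the coverage of any explanation for that prediction.

```python
import random

random.seed(0)

# Toy binary classifiers on the unit square (invented for illustration;
# the paper's definitions are far more general than this sketch).
def linear_clf(x, y):
    """Binary linear classifier: label 1 above the line x + y = 1."""
    return 1 if x + y > 1.0 else 0

def disk_clf(x, y):
    """Label 1 only inside a small disk of radius 0.1 around (0.5, 0.5)."""
    return 1 if (x - 0.5) ** 2 + (y - 0.5) ** 2 < 0.01 else 0

def max_coverage(clf, px, py, n=20_000):
    """Monte Carlo estimate of the measure of the feature space that
    shares the label predicted at (px, py). Any explanation region for
    the prediction must lie inside this same-label set, so the estimate
    upper-bounds the coverage of every explanation for the point."""
    label = clf(px, py)
    hits = sum(1 for _ in range(n)
               if clf(random.random(), random.random()) == label)
    return hits / n

# For the linear classifier, the half-plane itself is a simple
# explanation with coverage near 0.5; for the disk classifier, any
# explanation of a positive prediction is confined to the small disk.
print(max_coverage(linear_clf, 0.9, 0.9))  # roughly 0.5
print(max_coverage(disk_clf, 0.5, 0.5))    # roughly 0.03
```

The contrast is the point of the sketch: the linear classifier admits large, simple explanations for almost every point, while the disk classifier's positive predictions can only ever be explained by small regions.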




The Shape of Explanations: A Topological Account of Rule-Based Explanations in Machine Learning

Rule-based explanations provide simple reasons explaining the behavior o...

ExoMiner: A Highly Accurate and Explainable Deep Learning Classifier to Mine Exoplanets

The Kepler and TESS missions have generated over 100,000 potential trans...

Towards Self-Explainable Cyber-Physical Systems

With the increasing complexity of CPSs, their behavior and decisions bec...

Towards a Characterization of Explainable Systems

Building software-driven systems that are easily understood becomes a ch...

An Explainable Probabilistic Classifier for Categorical Data Inspired to Quantum Physics

This paper presents Sparse Tensor Classifier (STC), a supervised classif...

SEAT: Stable and Explainable Attention

Currently, attention mechanism becomes a standard fixture in most state-...