Explaining Naive Bayes and Other Linear Classifiers with Polynomial Time and Delay

08/13/2020
by Joao Marques-Silva, et al.

Recent work proposed the computation of so-called PI-explanations of Naive Bayes Classifiers (NBCs). PI-explanations are subset-minimal sets of feature-value pairs that are sufficient for the prediction, and have so far been computed with state-of-the-art exact algorithms that are worst-case exponential in time and space. In contrast, we show that one PI-explanation for an NBC can be computed in log-linear time, and that the same result extends to the more general class of linear classifiers. Furthermore, we show that PI-explanations can be enumerated with polynomial delay. Experimental results demonstrate the performance gains of the new algorithms over earlier work, and also investigate ways to measure the quality of heuristic explanations.
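The abstract does not spell the algorithm out, but the log-linear bound suggests a sort-then-greedy scheme: score the gap each fixed feature contributes over its adversarial worst case, sort the gaps, and fix features until the prediction is entailed no matter how the free features are set. The sketch below illustrates that idea for a linear classifier over interval feature domains; the function and variable names are ours, not the paper's.

```python
def one_explanation(w, b, x, lo, hi):
    """Sketch: one PI-explanation for a linear classifier that predicts
    class 1 when w.x + b >= 0, on an instance x whose feature i ranges
    over [lo[i], hi[i]].  Sort-then-greedy, O(n log n) overall."""
    n = len(w)
    # Adversarial (worst-case) value of each term when the feature is free.
    worst = [min(w[i] * lo[i], w[i] * hi[i]) for i in range(n)]
    # Score of the instance if every feature were left free.
    slack = b + sum(worst)
    # Gap recovered by fixing feature i to its observed value x[i].
    gap = [w[i] * x[i] - worst[i] for i in range(n)]
    # Fix features with the largest gaps first, until the prediction
    # w.x + b >= 0 holds for every completion of the free features.
    expl = []
    for i in sorted(range(n), key=lambda i: gap[i], reverse=True):
        if slack >= 0:
            break
        slack += gap[i]
        expl.append(i)
    return sorted(expl)
```

For example, with `w = [1, 2, -1]`, `b = 0`, instance `x = [1, 1, 0]` and all domains `[0, 1]`, fixing only feature 1 already guarantees a non-negative score (worst case `2 - 1 = 1`), so the single feature-value pair `x[1] = 1` is a sufficient reason for the prediction.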

