Explaining Naive Bayes and Other Linear Classifiers with Polynomial Time and Delay

by Joao Marques-Silva, et al.

Recent work proposed the computation of so-called PI-explanations of Naive Bayes Classifiers (NBCs). PI-explanations are subset-minimal sets of feature-value pairs that are sufficient for the prediction, and have so far been computed with state-of-the-art exact algorithms that are worst-case exponential in time and space. In contrast, we show that one PI-explanation for an NBC can be computed in log-linear time, and that the same result extends to the more general class of linear classifiers. Furthermore, we show that PI-explanations can be enumerated with polynomial delay. Experimental results demonstrate the performance gains of the new algorithms over earlier work, and also investigate ways to measure the quality of heuristic explanations.
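The log-linear bound for linear classifiers can be illustrated with a short sketch. The idea (under my reading of the abstract) is that for a linear classifier over Boolean features, each feature's fixed value recovers a known amount of score over its worst-case value; sorting features by that recovery and greedily fixing the largest ones until the worst-case score clears the decision threshold yields a sufficient explanation, with the sort dominating the cost at O(n log n). The function name and representation below are illustrative, not the paper's API:

```python
# Hypothetical sketch: extract one sufficient explanation for a binary
# linear classifier f(x) = sign(w.x + b) over Boolean features, assuming
# the instance x is predicted positive (w.x + b > 0).

def one_pi_explanation(w, b, x):
    # Contribution of each feature at its value in x.
    actual = [w[i] * x[i] for i in range(len(w))]
    # Worst-case contribution if the feature were left free to flip.
    worst = [min(0, w[i]) for i in range(len(w))]
    # How much score fixing feature i to its value in x recovers.
    gap = [a - lo for a, lo in zip(actual, worst)]

    # Score if every feature took its worst-case value.
    slack = b + sum(worst)

    # Fix features with the largest recovery first, until the
    # worst-case score crosses the decision threshold.
    order = sorted(range(len(w)), key=lambda i: gap[i], reverse=True)
    expl = []
    for i in order:
        if slack > 0:
            break  # remaining features can no longer flip the prediction
        slack += gap[i]
        expl.append(i)
    return sorted(expl)
```

For example, with `w = [3, -1, 2]`, `b = -1` and instance `x = [1, 0, 1]`, fixing feature 0 alone already guarantees a positive score whatever the other features take, so the sketch returns `[0]`.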

Related papers:

- On Explaining Random Forests with SAT
- Provably Precise, Succinct and Efficient Explanations for Decision Trees
- On Computing Relevant Features for Explaining NBCs
- On the Tractability of SHAP Explanations
- A Symbolic Approach to Explaining Bayesian Network Classifiers
- Explanations for Monotonic Classifiers
- Efficient Explanations With Relevant Sets