ExMo: Explainable AI Model using Inverse Frequency Decision Rules

by Pradip Mainali, et al.

In this paper, we present a novel method of computing decision rules to build a more accurate interpretable machine learning model, denoted as ExMo. The ExMo interpretable machine learning model consists of a list of IF...THEN... statements with a decision rule in each condition. In this way, ExMo naturally provides an explanation for a prediction via the decision rule that was triggered. ExMo uses a new approach to extract decision rules from the training data based on term frequency-inverse document frequency (TF-IDF) features. With TF-IDF, the extracted decision rules contain feature values that are more relevant to each class. Hence, the decision rules obtained by ExMo can distinguish the positive and negative classes better than the decision rules used in the existing Bayesian Rule List (BRL) algorithm, which are obtained using a frequent pattern mining approach. The paper also shows that ExMo learns a qualitatively better model than BRL. Furthermore, we demonstrate that ExMo can provide its explanations as human-friendly text that non-expert users can easily understand. We validate ExMo on several datasets of different sizes to evaluate its efficacy. Experimental validation on a real-world fraud detection application shows that ExMo is 20% more accurate than BRL and that it achieves accuracy similar to that of deep learning models.
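To make the TF-IDF idea concrete, the sketch below is an illustrative (not the paper's actual) rule-candidate scorer: each training example is reduced to a set of discretized feature-value "terms", all examples of a class are pooled into one "document", and TF-IDF scores the terms so that class-unique feature values rank highest while values shared across classes score zero. All feature names and data in this toy fraud example are hypothetical.

```python
import math
from collections import Counter

# Toy training data: each row is a set of discretized feature-value "terms",
# grouped by class label (all names here are illustrative, not from the paper).
docs = {
    "fraud": [{"amount=high", "country=foreign", "hour=night", "channel=web"},
              {"amount=high", "hour=night", "device=new"}],
    "legit": [{"amount=low", "country=domestic", "hour=day", "channel=web"},
              {"amount=low", "hour=day", "device=known"}],
}

def tfidf_terms(docs):
    """Score each feature-value term per class with TF-IDF.

    One 'document' per class: term frequency counts occurrences within the
    class; document frequency counts in how many classes the term appears.
    """
    tf = {c: Counter(t for row in rows for t in row) for c, rows in docs.items()}
    n_classes = len(docs)
    df = Counter(t for counts in tf.values() for t in counts)
    scores = {}
    for c, counts in tf.items():
        total = sum(counts.values())
        scores[c] = {t: (n / total) * math.log(n_classes / df[t])
                     for t, n in counts.items()}
    return scores

scores = tfidf_terms(docs)
# Terms unique to one class get a positive score; terms occurring in both
# classes (e.g. "channel=web") get idf = log(2/2) = 0, so the top-scoring
# terms are exactly the class-discriminative rule conditions.
top_fraud = max(scores["fraud"], key=scores["fraud"].get)
```

Under this scoring, a frequent but class-agnostic pattern never surfaces as a rule condition, which is the intuition behind why TF-IDF-derived rules separate the classes better than plain frequent-pattern mining.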



