Efficient Learning of Interpretable Classification Rules

05/14/2022
by Bishwamittra Ghosh, et al.

Machine learning has become omnipresent, with applications in safety-critical domains such as medicine, law, and transportation. The high-stakes decisions made in these domains require interpretable models, whose predictions are understandable to a human. In interpretable machine learning, rule-based classifiers are particularly effective at representing the decision boundary through a set of rules over the input features. The interpretability of a rule-based classifier is generally related to the size of its rules, with smaller rules considered more interpretable. The brute-force approach to learning such a classifier is to solve an optimization problem that seeks the smallest classification rule with close to maximum accuracy. This optimization problem is computationally intractable due to its combinatorial nature and therefore does not scale to large datasets. To this end, in this paper we study the triangular relationship among the accuracy, interpretability, and scalability of learning rule-based classifiers. The contribution of this paper is an interpretable learning framework, IMLI, based on maximum satisfiability (MaxSAT) for synthesizing classification rules expressible in propositional logic. Despite the progress in MaxSAT solving over the last decade, the straightforward MaxSAT-based solution does not scale. Therefore, we incorporate an efficient incremental learning technique inside the MaxSAT formulation by integrating mini-batch learning and iterative rule learning. In our experiments, IMLI achieves the best balance among prediction accuracy, interpretability, and scalability. As an application, we deploy IMLI to learn popular interpretable classifiers such as decision lists and decision sets.
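To make the MaxSAT formulation concrete, below is a minimal, illustrative sketch (not the paper's exact IMLI encoding) of learning a single conjunctive clause over binary features with the RC2 MaxSAT solver from PySAT. The variable layout, the weight lam that trades accuracy against rule size, and the toy data are assumptions made for this example; IMLI itself learns multi-clause CNF formulas and trains incrementally over mini-batches.

    # Illustrative sketch only: learn one conjunctive rule via MaxSAT,
    # balancing classification accuracy against rule size.
    from pysat.formula import WCNF
    from pysat.examples.rc2 import RC2


    def learn_single_clause(X, y, lam=10):
        """X: list of binary feature vectors, y: list of 0/1 labels.
        lam: weight for classifying a sample correctly, relative to a
        unit penalty for each feature included in the rule (assumption)."""
        n, m = len(X), len(X[0])
        b = lambda j: j + 1           # b_j: feature j appears in the rule
        eta = lambda i: m + i + 1     # eta_i: sample i is misclassified

        wcnf = WCNF()
        # Sparsity: softly prefer leaving each feature out of the rule.
        for j in range(m):
            wcnf.append([-b(j)], weight=1)

        for i, (x, label) in enumerate(zip(X, y)):
            zero_feats = [j for j in range(m) if x[j] == 0]
            if label == 1:
                # Positive sample: selecting any feature that is 0 here
                # leaves the sample uncovered, i.e. forces eta_i.
                for j in zero_feats:
                    wcnf.append([-b(j), eta(i)])
            else:
                # Negative sample: correctly rejected only if some selected
                # feature is 0 on this sample; otherwise eta_i must hold.
                wcnf.append([eta(i)] + [b(j) for j in zero_feats])
            # Softly ask for each sample to be classified correctly.
            wcnf.append([-eta(i)], weight=lam)

        model = RC2(wcnf).compute()   # optimal MaxSAT assignment
        return [j for j in range(m) if model[b(j) - 1] > 0]


    # Toy usage (assumed data): feature 0 should predict the positive class.
    X = [[1, 0], [1, 1], [0, 1], [0, 0]]
    y = [1, 1, 0, 0]
    print(learn_single_clause(X, y))  # expected: [0]

The soft unit clauses on the eta variables penalize misclassified samples, while the soft units on the b variables penalize every feature added to the rule, so the solver trades accuracy against rule size as described above. An incremental, mini-batch variant in the spirit of the paper would re-solve such a formula per batch while biasing the sparsity constraints toward the rule learned so far.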
