IMLI: An Incremental Framework for MaxSAT-Based Learning of Interpretable Classification Rules

01/07/2020
by Bishwamittra Ghosh, et al.

The wide adoption of machine learning in critical domains such as medical diagnosis, law, and education has propelled the need for interpretable techniques, since end users must be able to understand the reasoning behind decisions made by learning systems. The computational intractability of interpretable learning has led practitioners to design heuristic techniques, which fail to provide sound handles to trade off accuracy and interpretability. Motivated by the success of MaxSAT solvers over the past decade, a MaxSAT-based approach called MLIC was recently proposed that reduces the problem of learning interpretable rules expressed in Conjunctive Normal Form (CNF) to a MaxSAT query. While MLIC was shown to achieve accuracy similar to that of state-of-the-art black-box classifiers while generating small interpretable CNF formulas, its runtime performance lags significantly and renders the approach unusable in practice. In this context, the authors raised the question: is it possible to achieve the best of both worlds, i.e., a sound framework for interpretable learning that can take advantage of MaxSAT solvers while scaling to real-world instances? In this paper, we take a step towards answering this question in the affirmative. We propose IMLI, an incremental MaxSAT-based framework that achieves scalable runtime performance via a partition-based training methodology. Extensive experiments on benchmarks from the UCI repository demonstrate that IMLI achieves up to three orders of magnitude of runtime improvement without loss of accuracy or interpretability.
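To make the reduction concrete, the sketch below encodes a heavily simplified version of the learning problem, a single CNF clause over binary features, as a weighted partial MaxSAT query and solves it with the RC2 solver from the PySAT library. The variable layout, the weight lam, the toy data set, and the one-clause restriction are illustrative assumptions for this sketch, not the exact encoding used by MLIC or IMLI.

```python
# Toy reduction of single-clause CNF rule learning to weighted partial MaxSAT,
# in the spirit of the MLIC/IMLI encoding described above.
# Assumes the python-sat (PySAT) package: pip install python-sat
# The one-clause restriction, variable names, and tiny data set are illustrative.

from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

# Binary feature matrix X (n samples x m features) and labels y (hypothetical data).
X = [
    [1, 0, 1],
    [1, 1, 0],
    [0, 0, 1],
    [0, 1, 0],
]
y = [1, 1, 0, 0]
n, m = len(X), len(X[0])
lam = 10  # weight trading off classification errors against rule size

# Variable layout: b_1..b_m select features for the clause,
# eta_1..eta_n mark samples whose constraint is relaxed (misclassified).
def b(j):   return j + 1          # 1..m
def eta(i): return m + i + 1      # m+1..m+n

wcnf = WCNF()

# Soft clauses: prefer dropping features (small rule) and few misclassifications.
for j in range(m):
    wcnf.append([-b(j)], weight=1)
for i in range(n):
    wcnf.append([-eta(i)], weight=lam)

# Hard clauses: consistency with each sample unless its eta_i is paid for.
for i in range(n):
    if y[i] == 1:
        # Positive sample: some selected feature must be 1 in x_i.
        wcnf.append([eta(i)] + [b(j) for j in range(m) if X[i][j] == 1])
    else:
        # Negative sample: no selected feature may be 1 in x_i.
        for j in range(m):
            if X[i][j] == 1:
                wcnf.append([eta(i), -b(j)])

with RC2(wcnf) as solver:
    model = solver.compute()

chosen = [j for j in range(m) if model[b(j) - 1] > 0]
print("learned clause: OR of features", chosen)
```

IMLI's incremental step can be pictured as running such a query on one partition of the data at a time, seeding each query with the rule learned from the previous partitions, so that every MaxSAT instance stays small even when the full data set is large.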


Related research

12/05/2018
MLIC: A MaxSAT-Based framework for learning interpretable classification rules

05/14/2022
Efficient Learning of Interpretable Classification Rules

06/20/2018
Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems

04/09/2021
Individual Explanations in Machine Learning Models: A Survey for Practitioners

09/13/2019
A Double Penalty Model for Interpretability

07/07/2022
A unified interpretable intelligent learning diagnosis framework for smart education

07/12/2022
Revealing Unfair Models by Mining Interpretable Evidence
