MLIC: A MaxSAT-based framework for learning interpretable classification rules

12/05/2018
by Dmitry Malioutov, et al.

The wide adoption of machine learning approaches in industry, government, medicine, and science has renewed interest in interpretable machine learning: many decisions are too important to be delegated to black-box techniques such as deep neural networks or kernel SVMs. Historically, the problem of learning interpretable classifiers, including classification rules or decision trees, has been approached by greedy heuristic methods, as essentially all exact optimization formulations are NP-hard. Our primary contribution is a MaxSAT-based framework, called MLIC, which allows principled search for interpretable classification rules expressible in propositional logic. Our approach benefits from the revolutionary advances in the constraint satisfaction community in solving large-scale instances of such problems. In experimental evaluations over a collection of benchmarks arising from practical scenarios, we demonstrate its effectiveness: the formulation can solve large classification problems with tens or hundreds of thousands of examples and thousands of features, and provides a tunable balance of accuracy vs. interpretability. Furthermore, we show that in many problems interpretability can be obtained at only a minor cost in accuracy. The primary objective of the paper is to show that recent advances in the MaxSAT literature make it realistic to find optimal (or very high-quality near-optimal) solutions to large-scale classification problems. The key goal of the paper is to encourage researchers in both the interpretable classification and CP communities to take this work further, propose richer formulations, and develop bespoke solvers attuned to the problem of interpretable ML.
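To make the objective concrete, here is a minimal, illustrative sketch of the trade-off the abstract describes: learning a single OR-of-literals rule that minimizes misclassifications plus a penalty on rule size. The toy dataset, feature names, and the brute-force search are all hypothetical stand-ins; MLIC itself encodes this search as a (partial weighted) MaxSAT instance and delegates it to an off-the-shelf MaxSAT solver rather than enumerating clauses.

```python
from itertools import combinations

# Hypothetical toy dataset: each sample is a dict of 0/1 features, with label y.
X = [
    {"fever": 1, "cough": 1, "rash": 0},
    {"fever": 1, "cough": 0, "rash": 0},
    {"fever": 0, "cough": 1, "rash": 1},
    {"fever": 0, "cough": 0, "rash": 0},
]
y = [1, 1, 0, 0]

features = sorted(X[0])
# Candidate literals: each feature and its negation.
literals = [(f, True) for f in features] + [(f, False) for f in features]

def clause_value(clause, sample):
    """Evaluate an OR-of-literals clause on one sample."""
    return any(sample[f] == want for f, want in clause)

def cost(clause, lam=0.5):
    """MLIC-style objective: misclassifications plus lam * rule size."""
    errors = sum(clause_value(clause, x) != label for x, label in zip(X, y))
    return errors + lam * len(clause)

# Brute-force search over all clauses of up to 2 literals -- a stand-in for
# the MaxSAT solve, feasible only because the toy instance is tiny.
best = min(
    (c for k in range(1, 3) for c in combinations(literals, k)),
    key=cost,
)
print("best rule:", best, "cost:", cost(best))
```

On this toy data the search recovers the single-literal rule "fever", which classifies every sample correctly; raising `lam` biases the search toward even shorter (more interpretable) rules at the expense of accuracy, which is the tunable balance the paper refers to.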


