LoRMIkA: Local Rule-based Model Interpretability with k-optimal Associations

08/11/2019 · by Dilini Rajapaksha, et al.

As we rely more and more on machine learning models for real-life decision-making, being able to understand and trust their predictions becomes ever more important. Local explainer models have recently been introduced to explain the predictions of complex machine learning models at the instance level. In this paper, we propose Local Rule-based Model Interpretability with k-optimal Associations (LoRMIkA), a novel model-agnostic approach that obtains k-optimal association rules from a neighborhood of the instance to be explained. In contrast to other rule-based approaches in the literature, we argue that the most predictive rules are not necessarily those that provide the best explanations. Consequently, the LoRMIkA framework provides a flexible way to obtain rules that are both predictive and interesting, using an efficient search algorithm guaranteed to find the k-optimal rules with respect to objectives such as strength, lift, leverage, coverage, and support. It provides multiple rules that explain the model's decision, as well as counterfactual rules that indicate how the input could be changed to obtain a different output for a given instance. We compare our approach to other state-of-the-art approaches in local model interpretability on three different datasets and achieve competitive results in terms of local accuracy and interpretability.


Related research

09/20/2023 · A New Interpretable Neural Network-Based Rule Model for Healthcare Decision Making
In healthcare applications, understanding how machine/deep learning mode...

06/17/2020 · Diverse Rule Sets
While machine-learning models are flourishing and transforming many aspe...

02/15/2022 · LIMREF: Local Interpretable Model Agnostic Rule-based Explanations for Forecasting, with an Application to Electricity Smart Meter Data
Accurate electricity demand forecasts play a crucial role in sustainable...

11/16/2021 · SMACE: A New Method for the Interpretability of Composite Decision Systems
Interpretability is a pressing issue for decision systems. Many post hoc...

04/03/2020 · A rigorous method to compare interpretability of rule-based algorithms
Interpretability is becoming increasingly important in predictive model ...

05/27/2022 · A Sea of Words: An In-Depth Analysis of Anchors for Text Data
Anchors [Ribeiro et al. (2018)] is a post-hoc, rule-based interpretabili...

11/03/2020 · MAIRE – A Model-Agnostic Interpretable Rule Extraction Procedure for Explaining Classifiers
The paper introduces a novel framework for extracting model-agnostic hum...
