Reason to explain: Interactive contrastive explanations (REASONX)

05/29/2023
by Laura State, et al.

Many high-performing machine learning models are not interpretable. As they are increasingly used in decision scenarios that can critically affect individuals, it is necessary to develop tools to better understand their outputs. Contrastive explanations are a popular explanation method, but they suffer from several shortcomings, among them an insufficient incorporation of background knowledge and a lack of interactivity. While (dialogue-like) interactivity is important to better communicate an explanation, background knowledge can significantly improve its quality, e.g., by adapting the explanation to the needs of the end-user. To close this gap, we present REASONX, an explanation tool based on Constraint Logic Programming (CLP). REASONX provides interactive contrastive explanations that can be augmented by background knowledge, and it can operate under a setting of under-specified information, leading to increased flexibility in the provided explanations. REASONX computes factual and contrastive decision rules, as well as closest contrastive examples. It provides explanations for decision trees, which can be the ML models under analysis, or global/local surrogate models of any ML model. While the core of REASONX is built on CLP, we also provide a program layer that allows computing the explanations via Python, making the tool accessible to a wider audience. We illustrate the capability of REASONX on a synthetic data set and on a well-developed example in the credit domain. In both cases, we show how REASONX can be flexibly used and tailored to the needs of the user.
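To make the abstract's notions concrete, the following is a minimal, self-contained sketch of the two kinds of output it names: a factual decision rule (the path a decision tree takes for an instance) and a closest contrastive example (a minimally changed instance that flips the prediction). The toy credit tree, feature names, and greedy threshold search below are illustrative assumptions only; REASONX itself solves this with CLP, and its actual API is not shown here.

```python
# Hypothetical sketch (not REASONX's API): a toy decision tree for credit
# approval, represented as nested tuples (feature, threshold, left, right),
# with string class labels at the leaves.
tree = ("income", 50.0,
        ("debt", 10.0, "approve", "deny"),  # taken when income < 50
        "approve")                          # taken when income >= 50

def predict_path(node, x):
    """Return (label, rule): the predicted class and the factual decision
    rule, i.e. the list of split conditions on the root-to-leaf path."""
    rule = []
    while isinstance(node, tuple):
        feat, thr, left, right = node
        if x[feat] < thr:
            rule.append((feat, "<", thr))
            node = left
        else:
            rule.append((feat, ">=", thr))
            node = right
    return node, rule

def closest_contrastive(node, x, target, eps=1e-6):
    """Greedy single-feature search for a closest contrastive example:
    nudge one feature just across a tree threshold so the prediction
    becomes `target`. A sketch of the idea, not an exact CLP solve."""
    thrs = {}
    def collect(n):
        if isinstance(n, tuple):
            feat, thr, left, right = n
            thrs.setdefault(feat, []).append(thr)
            collect(left); collect(right)
    collect(node)
    best = None
    for feat, values in thrs.items():
        for thr in values:
            for cand_val in (thr - eps, thr + eps):
                cand = dict(x)
                cand[feat] = cand_val
                label, _ = predict_path(node, cand)
                if label == target:
                    cost = abs(cand_val - x[feat])
                    if best is None or cost < best[0]:
                        best = (cost, cand)
    return best[1] if best else None

x = {"income": 40.0, "debt": 15.0}
label, rule = predict_path(tree, x)   # factual rule: income < 50, debt >= 10
cex = closest_contrastive(tree, x, "approve")
```

For this instance the factual rule is `income < 50 and debt >= 10 -> deny`, and the cheapest flip (under this per-feature cost) is lowering `debt` just below 10 rather than raising `income` to 50; a contrastive decision rule would analogously be read off the path the contrastive example takes.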


Related research

- 09/01/2023, Declarative Reasoning on Explanations Using Constraint Logic Programming: "Explaining opaque Machine Learning (ML) models is an increasingly releva..."
- 02/19/2020, Learning Global Transparent Models from Local Contrastive Explanations: "There is a rich and growing literature on producing local point wise con..."
- 06/28/2023, Increasing Performance And Sample Efficiency With Model-agnostic Interactive Feature Attributions: "Model-agnostic feature attributions can provide local insights in comple..."
- 11/13/2017, Learning Abduction under Partial Observability: "Juba recently proposed a formulation of learning abductive reasoning fro..."
- 07/13/2022, Policy Optimization with Sparse Global Contrastive Explanations: "We develop a Reinforcement Learning (RL) framework for improving an exis..."
- 08/20/2021, VAE-CE: Visual Contrastive Explanation using Disentangled VAEs: "The goal of a classification model is to assign the correct labels to da..."
- 07/08/2020, Just in Time: Personal Temporal Insights for Altering Model Decisions: "The interpretability of complex Machine Learning models is coming to be ..."
