LIMEtree: Interactively Customisable Explanations Based on Local Surrogate Multi-output Regression Trees

05/04/2020
by Kacper Sokol, et al.

Systems based on artificial intelligence and machine learning models should be transparent, in the sense of being capable of explaining their decisions to gain humans' approval and trust. While a number of explainability techniques can be used to this end, many of them only output a single one-size-fits-all explanation that cannot address all of the explainees' diverse needs. In this work we introduce a model-agnostic and post-hoc local explainability technique for black-box predictions called LIMEtree, which employs surrogate multi-output regression trees. We validate our algorithm on a deep neural network trained for object detection in images and compare it against Local Interpretable Model-agnostic Explanations (LIME). Our method comes with local fidelity guarantees and can produce a range of diverse explanation types, including contrastive and counterfactual explanations praised in the literature. Some of these explanations can be interactively personalised to create bespoke, meaningful and actionable insights into the model's behaviour. While other methods may give an illusion of customisability by wrapping otherwise static explanations in an interactive interface, our explanations are truly interactive, in the sense of allowing the user to "interrogate" a black-box model. LIMEtree can therefore produce consistent explanations on which an interactive exploratory process can be built.
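To make the surrogate idea concrete, the sketch below illustrates the core mechanism the abstract describes: sample around an instance, query the black box for all class probabilities, and fit a single multi-output regression tree as the local surrogate. This is a minimal illustration under stated assumptions, not the authors' reference implementation; the names (black_box_predict_proba, local_surrogate_tree), the Gaussian perturbation, and the scikit-learn tree are illustrative choices rather than details taken from the paper.

```python
# Minimal sketch of a local multi-output regression tree surrogate.
# Assumptions: tabular features, a `black_box_predict_proba` callable that
# maps an (n, d) array to (n, k) class probabilities, and Gaussian sampling
# as a stand-in for the interpretable-domain sampling used for images.
import numpy as np
from sklearn.tree import DecisionTreeRegressor


def local_surrogate_tree(black_box_predict_proba, instance, n_samples=1000,
                         scale=0.1, max_depth=4, random_state=0):
    """Fit one multi-output regression tree around `instance`.

    The tree regresses all k class probabilities jointly, so a single
    surrogate covers every class rather than one independent model per
    class (as in vanilla LIME).
    """
    rng = np.random.default_rng(random_state)

    # Perturb the explained instance to build a local neighbourhood.
    samples = instance + rng.normal(0.0, scale, size=(n_samples, instance.size))

    # Query the black box for full probability vectors: shape (n_samples, k).
    targets = black_box_predict_proba(samples)

    # Fit a shallow tree on all outputs at once (scikit-learn supports
    # multi-output regression targets natively).
    tree = DecisionTreeRegressor(max_depth=max_depth, random_state=random_state)
    tree.fit(samples, targets)
    return tree
```

Because a single tree approximates all class probabilities at once, contrastive questions such as "why class A rather than class B" can be answered from the same structure, which is one plausible reading of how LIMEtree keeps its explanations consistent under interactive exploration.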


