MeLIME: Meaningful Local Explanation for Machine Learning Models

09/12/2020
by Tiago Botari, et al.

Most state-of-the-art machine learning algorithms induce black-box models, preventing their application in many sensitive domains. Hence, many methodologies for explaining machine learning models have been proposed to address this problem. In this work, we introduce strategies to improve local explanations by taking into account the distribution of the data used to train the black-box model. We show that our approach, MeLIME, produces more meaningful explanations than other techniques across different ML models operating on various types of data. MeLIME generalizes the LIME method, allowing more flexible perturbation sampling and the use of different local interpretable models. Additionally, we introduce modifications to the standard training algorithms of local interpretable models, fostering more robust explanations and even allowing the production of counterfactual examples. To show the strengths of the proposed approach, we include experiments on tabular data, images, and text, all showing improved explanations. In particular, MeLIME generated more meaningful explanations on the MNIST dataset than methods such as GuidedBackprop, SmoothGrad, and Layer-wise Relevance Propagation. MeLIME is available at https://github.com/tiagobotari/melime.
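To make the local-surrogate idea concrete, here is a minimal LIME-style sketch of the pipeline MeLIME generalizes: sample perturbations around an instance, query the black-box model, and fit a proximity-weighted linear surrogate whose coefficients serve as the local explanation. The `black_box` function and all parameters below are illustrative stand-ins, not MeLIME's actual API; MeLIME's contribution is precisely to replace the naive Gaussian sampler in step 1 with density-aware samplers and to allow other local interpretable models in step 4.

```python
import numpy as np

# Hypothetical black-box classifier: a stand-in, not the paper's model.
def black_box(X):
    return (X[:, 0] ** 2 + 0.5 * X[:, 1] > 1.0).astype(float)

rng = np.random.default_rng(0)
x0 = np.array([1.0, 0.5])  # instance to explain

# 1. Sample perturbations around x0 (naive Gaussian; MeLIME swaps this
#    for samplers that respect the training-data distribution).
Z = x0 + rng.normal(scale=0.3, size=(500, 2))

# 2. Query the black-box model on the perturbed points.
y = black_box(Z)

# 3. Weight samples by proximity to x0 (Gaussian kernel).
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / (2 * 0.3 ** 2))

# 4. Fit a weighted linear surrogate via weighted least squares.
A = np.hstack([Z, np.ones((len(Z), 1))])  # add intercept column
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)

# coef[:2] are the local feature importances at x0.
print(coef[:2])
```

Near x0 the black-box decision surface has gradient (2, 0.5), so the surrogate assigns a larger weight to the first feature, matching the intuition that an explanation should recover the locally dominant feature.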

