Using Decision Tree as Local Interpretable Model in Autoencoder-based LIME

04/07/2022
by Niloofar Ranjbar, et al.

Nowadays, deep neural networks are used in many domains because of their high accuracy. However, they are considered "black boxes," meaning that their decisions are not explainable to humans. In domains such as medicine, economics, and self-driving cars, users want models to be interpretable so that they can decide whether to trust the results. In this work, we present a modified version of ALIME, an autoencoder-based approach for local interpretability. ALIME itself is inspired by a well-known method called Local Interpretable Model-agnostic Explanations (LIME). LIME explains a single instance by generating new data around it and training a local, interpretable linear model on those samples. ALIME extends this by using an autoencoder to weight the generated data around the instance; however, like LIME, it still uses a linear model as the locally trained interpretable model. This work proposes a new approach that uses a decision tree, instead of the linear model, as the interpretable surrogate. We evaluate the proposed model in terms of stability, local fidelity, and interpretability on different datasets. Compared to ALIME, the experiments show significantly better stability and local fidelity, and improved interpretability.
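To make the pipeline concrete, here is a minimal sketch of a tree-based local surrogate in the spirit described above. It is not the authors' implementation: the callable black_box_predict, an autoencoder object exposing an encode method, Gaussian perturbations, and the exponential kernel over latent-space distances are all illustrative assumptions.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def explain_instance(x, black_box_predict, autoencoder,
                     n_samples=1000, sigma=0.3, max_depth=3, seed=0):
    """Fit a shallow decision tree as a local surrogate around instance x.

    black_box_predict: callable mapping an (n, d) array to n predictions.
    autoencoder: hypothetical object with an encode(X) method returning
    latent codes, used only to weight the neighborhood samples.
    """
    rng = np.random.default_rng(seed)
    # 1. LIME-style neighborhood: perturb the instance with Gaussian noise.
    X_local = x + sigma * rng.standard_normal((n_samples, x.shape[0]))
    # 2. Query the black box on the perturbed samples.
    y_local = black_box_predict(X_local)
    # 3. ALIME-style weighting: distance to x in the autoencoder's latent
    #    space; nearer samples receive larger weights (kernel is assumed).
    dist = np.linalg.norm(autoencoder.encode(X_local)
                          - autoencoder.encode(x[None, :]), axis=1)
    weights = np.exp(-dist**2 / (2 * dist.std()**2 + 1e-12))
    # 4. The proposed change: fit a shallow decision tree, instead of a
    #    linear model, as the interpretable local model.
    tree = DecisionTreeRegressor(max_depth=max_depth)
    tree.fit(X_local, y_local, sample_weight=weights)
    return tree

The decision path of the returned tree for x can then be read off as a rule-based explanation, which is where the interpretability gain over a list of linear coefficients would come from. The noise scale, kernel, and tree depth above are illustrative defaults, not values from the paper.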


research
09/04/2019
ALIME: Autoencoder Based Approach for Local Interpretability
Machine learning and especially deep learning have garnered tremendous po...

research
07/15/2020
VAE-LIME: Deep Generative Model Based Approach for Local Data-Driven Model Interpretability Applied to the Ironmaking Industry
Machine learning applied to generate data-driven models are lacking of t...

research
10/22/2018
Assessing the Stability of Interpretable Models
Interpretable classification models are built with the purpose of provid...

research
11/04/2019
Explaining the Predictions of Any Image Classifier via Decision Trees
Despite outstanding contribution to the significant progress of Artifici...

research
06/17/2019
Learning Interpretable Models Using an Oracle
As Machine Learning (ML) becomes pervasive in various real world systems...

research
11/19/2022
Concept-based Explanations using Non-negative Concept Activation Vectors and Decision Tree for CNN Models
This paper evaluates whether training a decision tree based on concepts ...

research
12/03/2020
Neural Prototype Trees for Interpretable Fine-grained Image Recognition
Interpretable machine learning addresses the black-box nature of deep ne...
