ALIME: Autoencoder Based Approach for Local Interpretability

Machine learning, and especially deep learning, has garnered tremendous popularity in recent years due to improved performance over other methods. The availability of large amounts of data has aided the progress of deep learning. Nevertheless, deep learning models are opaque and often seen as black boxes. Thus, there is an inherent need to make these models interpretable, especially in the medical domain. In this work, we propose a locally interpretable method inspired by a recent tool that has gained a lot of interest, called local interpretable model-agnostic explanations (LIME). LIME generates a single-instance-level explanation by artificially generating a dataset around the instance (by random sampling and perturbation) and then training a local linear interpretable model. One of the major issues in LIME is the instability of the generated explanations, which is caused by the randomly generated dataset. Another issue in this kind of local interpretable model is local fidelity. We propose novel modifications to LIME by employing an autoencoder, which serves as a better weighting function for the local model. We perform extensive comparisons on different datasets and show that our proposed method improves both stability and local fidelity.
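The procedure the abstract describes (perturb the instance, weight the perturbed samples, fit a local linear surrogate) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the black-box model and the encoder here are hypothetical stand-ins (in ALIME the encoder would come from a pretrained autoencoder), and `sigma` and the ridge strength are assumed free parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box model to be explained (stand-in for any opaque model).
def black_box(X):
    return 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.5 * X[:, 2]

# Stand-in encoder: in ALIME this would be the encoder half of a pretrained
# autoencoder; a fixed linear map keeps this sketch self-contained.
W_enc = rng.normal(size=(3, 2))
def encode(X):
    return X @ W_enc

def alime_explain(x, n_samples=500, sigma=1.0, ridge=1e-3):
    """Explain black_box at instance x with a locally weighted linear model,
    weighting samples by distance in the encoder's latent space."""
    # 1. Generate a dataset around the instance via Gaussian perturbation.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    y = black_box(Z)
    # 2. Weight samples by proximity in latent space (ALIME's modification;
    #    vanilla LIME measures distance in the input space instead).
    d = np.linalg.norm(encode(Z) - encode(x[None, :]), axis=1)
    w = np.exp(-(d ** 2) / (sigma ** 2))
    # 3. Fit a weighted ridge regression as the local interpretable surrogate.
    Zb = np.hstack([Z, np.ones((n_samples, 1))])  # append intercept column
    A = Zb.T @ (w[:, None] * Zb) + ridge * np.eye(Zb.shape[1])
    b = Zb.T @ (w * y)
    coef = np.linalg.solve(A, b)
    return coef[:-1], coef[-1]  # per-feature weights, intercept

x0 = np.array([1.0, 0.5, -0.2])
weights, intercept = alime_explain(x0)
```

Because the stand-in black box is itself linear, the surrogate's feature weights recover its coefficients almost exactly; with a nonlinear model they would instead approximate its local behavior around `x0`.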

