Locally Interpretable Model Agnostic Explanations using Gaussian Processes

08/16/2021
by Aditya Saini et al.

Owing to tremendous performance improvements in data-intensive domains, machine learning (ML) has garnered immense interest in the research community. However, many ML models are black boxes that are difficult to interpret, which hinders their practical adoption. Local Interpretable Model-Agnostic Explanations (LIME) is a popular technique for explaining the prediction of a single instance. Although LIME is simple and versatile, it suffers from instability in the generated explanations. In this paper, we propose a Gaussian Process (GP) based variation of locally interpretable models. We employ a smart sampling strategy based on the acquisition functions used in Bayesian optimization. Further, we use an automatic relevance determination (ARD) covariance function in the GP, with a separate length-scale parameter for each feature; the reciprocals of the learned length-scales serve as feature explanations. We illustrate the performance of the proposed technique on two real-world datasets and demonstrate its superior stability. Furthermore, we show that the proposed technique generates faithful explanations using far fewer samples than LIME.
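The core idea of using ARD length-scales as explanations can be sketched with an off-the-shelf GP. The snippet below is a minimal illustration, not the paper's implementation: it fits a GP with one RBF length-scale per feature on perturbations around a single instance and reports the reciprocal length-scales as importances (a shorter learned length-scale means the function varies more along that feature, i.e. the feature is more relevant). The black-box model, the instance, and the perturbation width are all illustrative assumptions; the paper additionally replaces this random sampling with an acquisition-function-based strategy.

```python
# Illustrative sketch only: ARD-GP surrogate around one instance,
# with 1 / length_scale as a per-feature explanation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in model (assumption): feature 0 drives the output,
    # feature 1 has only a weak effect.
    return 3.0 * X[:, 0] + 0.1 * np.sin(X[:, 1])

instance = np.array([0.5, 0.5])
# Local neighborhood: Gaussian perturbations around the instance
# (the paper uses acquisition-function-guided sampling instead).
X = instance + 0.3 * rng.standard_normal((100, 2))
y = black_box(X)

# ARD kernel: a separate length-scale parameter for each feature.
kernel = RBF(length_scale=np.ones(2), length_scale_bounds=(1e-2, 1e3))
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Reciprocal length-scales act as feature explanations:
# a short length-scale (high reciprocal) marks a relevant feature.
importance = 1.0 / gp.kernel_.length_scale
print(importance)
```

On this toy function the learned length-scale for feature 0 comes out much shorter than for feature 1, so its reciprocal (importance) is larger, matching the intuition that the surrogate's sensitivity along each axis is what the explanation reports.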

Related research

10/10/2022: Local Interpretable Model Agnostic Shap Explanations for machine learning models
With the advancement of technology for artificial intelligence (AI) base...

04/01/2020: Ontology-based Interpretable Machine Learning for Textual Data
In this paper, we introduce a novel interpreting framework that learns a...

06/24/2019: DLIME: A Deterministic Local Interpretable Model-Agnostic Explanations Approach for Computer-Aided Diagnosis Systems
Local Interpretable Model-Agnostic Explanations (LIME) is a popular tech...

05/24/2023: Explaining the Uncertain: Stochastic Shapley Values for Gaussian Process Models
We present a novel approach for explaining Gaussian processes (GPs) that...

11/03/2020: MAIRE – A Model-Agnostic Interpretable Rule Extraction Procedure for Explaining Classifiers
The paper introduces a novel framework for extracting model-agnostic hum...

10/06/2021: Shapley variable importance clouds for interpretable machine learning
Interpretable machine learning has been focusing on explaining final mod...
