Explaining the Predictions of Any Image Classifier via Decision Trees

11/04/2019
by Sheng Shi, et al.

Despite their outstanding contribution to the rapid progress of Artificial Intelligence (AI), deep learning models remain mostly black boxes that offer little insight into their reasoning process and prediction results. Explainability is not only a gateway between AI and society but also a powerful tool for detecting flaws in the model and biases in the data. Local Interpretable Model-agnostic Explanations (LIME) is a recent approach that fits a linear regression model to form a local explanation of an individual prediction. However, because linear models are highly restricted and tend to oversimplify relationships, they fail in situations where nonlinear associations and interactions exist between the features and the prediction. This paper proposes an extended decision-tree-based LIME (TLIME) approach, which uses a decision tree model to form an interpretable representation that is locally faithful to the original model. The new approach can capture nonlinear interactions among the features in the data and create plausible explanations. Various experiments show that the TLIME explanations of multiple black-box models achieve more reliable performance in terms of understandability, fidelity, and efficiency.
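The abstract does not include code, but the core idea it describes (perturb the input, query the black box, and fit a depth-limited decision tree with proximity weights instead of LIME's linear model) can be sketched briefly. The sketch below is a minimal illustration on tabular features rather than image superpixels; the toy black box, the sampling scale, and the function name tlime_explain are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeRegressor, export_text

# Toy black-box model standing in for the image classifier (hypothetical setup).
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def tlime_explain(instance, predict_proba, n_samples=1000, scale=0.5, max_depth=3):
    """Fit a shallow decision tree to the black box's behavior near `instance`."""
    rng = np.random.default_rng(0)
    # Sample the neighborhood of the instance with Gaussian perturbations.
    Z = instance + rng.normal(0.0, scale, size=(n_samples, instance.shape[0]))
    # Query the black box: probability of the positive class for each sample.
    target = predict_proba(Z)[:, 1]
    # Proximity weights keep the surrogate locally faithful, as in LIME.
    weights = np.exp(-np.linalg.norm(Z - instance, axis=1) ** 2 / scale)
    # A depth-limited tree stays human-readable while capturing nonlinear
    # feature interactions that a linear surrogate would miss.
    tree = DecisionTreeRegressor(max_depth=max_depth, random_state=0)
    tree.fit(Z, target, sample_weight=weights)
    return tree

tree = tlime_explain(X[0], black_box.predict_proba)
print(export_text(tree, feature_names=[f"f{i}" for i in range(X.shape[1])]))
```

The printed tree rules serve as the explanation: each root-to-leaf path is a readable conjunction of feature thresholds approximating the black box's behavior around the explained instance.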


Related research

02/18/2020
A Modified Perturbed Sampling Method for Local Interpretable Model-agnostic Explanation
Explainability is a gateway between Artificial Intelligence and society ...

04/26/2020
An Extension of LIME with Improvement of Interpretability and Fidelity
While deep learning makes significant achievements in Artificial Intelli...

04/07/2022
Using Decision Tree as Local Interpretable Model in Autoencoder-based LIME
Nowadays, deep neural networks are being used in many domains because of...

04/06/2023
Retention Is All You Need
Skilled employees are usually seen as the most important pillar of an or...

07/21/2023
Morphological Image Analysis and Feature Extraction for Reasoning with AI-based Defect Detection and Classification Models
As the use of artificial intelligence (AI) models becomes more prevalent ...

02/02/2020
Interpretability of Blackbox Machine Learning Models through Dataview Extraction and Shadow Model creation
Deep learning models trained using massive amounts of data tend to captu...

07/25/2023
ForestMonkey: Toolkit for Reasoning with AI-based Defect Detection and Classification Models
Artificial intelligence (AI) reasoning and explainable AI (XAI) tasks ha...
