LIMEcraft: Handcrafted superpixel selection and inspection for Visual eXplanations

11/15/2021
by Weronika Hryniewska, et al.

The increased interest in deep learning applications, and their hard-to-detect biases, creates a need to validate and explain complex models. However, current explanation methods are limited in how they convey both the reasoning process and the prediction results: they usually show only the locations in the image that were important for the model's prediction. Because users cannot interact with these explanations, it is difficult to verify and understand exactly how the model works, which creates a significant risk when using the model. The risk is compounded by the fact that explanations do not take into account the semantic meaning of the explained objects. To escape the trap of static explanations, we propose an approach called LIMEcraft that allows a user to interactively select semantically consistent areas and thoroughly examine the prediction for an image instance that contains many image features. Experiments on several models showed that our method improves model safety by inspecting model fairness for image regions that may indicate model bias. The code is available at: http://github.com/MI2DataLab/LIMEcraft
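LIMEcraft builds on LIME by replacing automatically computed superpixels with user-selected, semantically consistent regions. The authors' implementation is in the repository above; the sketch below is not their code, but a minimal illustration of the underlying idea using the standard lime Python library, whose explain_instance method accepts a custom segmentation_fn. The names model, image, and user_mask are assumptions: a trained Keras-style classifier, an input image, and a hand-drawn integer mask assigning each pixel to a semantic region.

    # Minimal sketch (not the authors' implementation): feeding a handcrafted
    # segmentation mask into the standard lime library instead of letting it
    # compute superpixels automatically.
    import numpy as np
    from lime import lime_image

    def make_segmentation_fn(user_mask):
        # Return a segmentation_fn that ignores the image and yields the
        # user's handcrafted region labels (2D int array, one id per pixel).
        def segmentation_fn(image):
            return user_mask
        return segmentation_fn

    def classifier_fn(images):
        # images is a batch of perturbed copies; return class probabilities.
        # `model` is an assumed, already-trained Keras-style classifier.
        return model.predict(np.asarray(images))

    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        image,                 # assumed H x W x 3 input image
        classifier_fn,
        top_labels=1,
        hide_color=0,          # perturb regions by zeroing them out
        num_samples=1000,
        segmentation_fn=make_segmentation_fn(user_mask),
    )

    # Inspect which handcrafted regions drove the prediction.
    label = explanation.top_labels[0]
    img, mask = explanation.get_image_and_mask(
        label, positive_only=False, num_features=5)

Because LIME perturbs whole segments at a time, supplying a handcrafted mask means the resulting importance scores attach to semantically meaningful objects (a face, a lesion, a background) rather than to arbitrary superpixels, which is what enables the fairness inspection described above.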


