
Explain Any Concept: Segment Anything Meets Concept-Based Explanation

05/17/2023
by Ao Sun et al.
University of Illinois at Urbana-Champaign
The Hong Kong University of Science and Technology

Explainable AI (XAI) is essential for improving human understanding of deep neural networks (DNNs), given their black-box internals. For computer vision tasks, mainstream pixel-based XAI methods explain DNN decisions by identifying important pixels, while emerging concept-based XAI methods form explanations from concepts (e.g., a head in an image). However, pixels are generally hard to interpret and sensitive to the imprecision of XAI methods, whereas the "concepts" in prior works either require human annotation or are limited to pre-defined concept sets. Meanwhile, driven by large-scale pre-training, the Segment Anything Model (SAM) has proven to be a powerful and promptable framework for precise and comprehensive instance segmentation, enabling automatic preparation of concept sets from a given image. This paper is the first to explore using SAM to augment concept-based XAI. We propose an effective and flexible concept-based explanation method, Explain Any Concept (EAC), which explains DNN decisions with any concept. While SAM is highly effective and offers "out-of-the-box" instance segmentation, it is costly to integrate into de facto XAI pipelines. We therefore propose a lightweight per-input equivalent (PIE) scheme that enables efficient explanation with a surrogate model. Our evaluation on two popular datasets (ImageNet and COCO) illustrates the highly encouraging performance of EAC over commonly used XAI methods.
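To make the described pipeline concrete, below is a minimal Python sketch of the flow the abstract outlines: SAM proposes one mask per visual concept for a given image, a cheap per-input surrogate is fit on random concept subsets, and the surrogate's weights serve as per-concept importance scores. The `SamAutomaticMaskGenerator` usage follows the public segment-anything API; the function names, zero-masking scheme, input normalization, and linear surrogate are illustrative assumptions standing in for the paper's PIE scheme, not the authors' implementation.

```python
# Sketch of an EAC-style pipeline: (1) SAM segments the image into concept
# masks, (2) a lightweight per-input surrogate maps "which concepts are kept"
# to the model's confidence, (3) its weights rank concept importance.
import numpy as np
import torch
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator


def discover_concepts(image_rgb, checkpoint="sam_vit_h.pth"):
    """Run SAM once to obtain one binary mask per visual concept."""
    sam = sam_model_registry["vit_h"](checkpoint=checkpoint)
    masks = SamAutomaticMaskGenerator(sam).generate(image_rgb)
    return [m["segmentation"] for m in masks]          # list of HxW bool arrays


def concept_importance(model, image_rgb, concepts, target_class, n_samples=256):
    """Fit a linear surrogate g(z) ~ f(image with concepts z kept); its
    per-concept weights are returned as importance scores (illustrative)."""
    h, w, _ = image_rgb.shape
    Z, y = [], []
    for _ in range(n_samples):
        z = np.random.randint(0, 2, size=len(concepts))   # random concept subset
        keep = np.zeros((h, w), dtype=bool)
        for zi, mask in zip(z, concepts):
            if zi:
                keep |= mask
        masked = image_rgb * keep[..., None]              # zero out dropped concepts
        x = torch.from_numpy(masked).permute(2, 0, 1).float().unsqueeze(0) / 255.0
        with torch.no_grad():
            prob = torch.softmax(model(x), dim=1)[0, target_class].item()
        Z.append(z)
        y.append(prob)
    A = np.column_stack([np.ones(n_samples), np.array(Z)])
    coef, *_ = np.linalg.lstsq(A, np.array(y), rcond=None)
    return coef[1:]                                       # one score per concept
```

Once fit, the cheap surrogate can be queried in place of the full DNN when estimating concept attributions, which is where the efficiency gain of a per-input surrogate comes from.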
