Concept Bottleneck with Visual Concept Filtering for Explainable Medical Image Classification

08/23/2023
by Injae Kim, et al.

Interpretability is a crucial factor in building reliable models for various medical applications. Concept Bottleneck Models (CBMs) enable interpretable image classification by using human-understandable concepts as intermediate targets. Unlike conventional approaches that require extensive human labor to construct the concept set, recent works leverage Large Language Models (LLMs) to generate concepts automatically. However, these methods do not consider whether a concept is visually relevant, which is essential for computing meaningful concept scores. We therefore propose a visual activation score that measures whether a concept contains visual cues and can be computed easily from unlabeled image data. The computed visual activation scores are then used to filter out less visible concepts, yielding a final concept set composed of visually meaningful concepts. Our experiments show that concept filtering with the proposed visual activation score consistently improves performance over the baseline, and qualitative analyses confirm that visually relevant concepts are successfully selected.
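The abstract does not spell out how the visual activation score is computed, but the idea can be illustrated with a short sketch. Assuming a CLIP-style vision-language model (the open_clip package here) and taking the visual activation score of a concept to be its mean image-text similarity over a pool of unlabeled images, concept filtering could look roughly as follows. The score definition, the keep_ratio parameter, and the function names are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch of visual-concept filtering with a CLIP-style model.
# Assumption: "visual activation score" = mean cosine similarity between a
# concept's text embedding and the embeddings of unlabeled images.
import torch
import open_clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _, preprocess = open_clip.create_model_and_transforms("ViT-B-32", pretrained="openai")
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model = model.to(device).eval()


@torch.no_grad()
def visual_activation_scores(concepts, images):
    """Score each concept by how strongly it activates on unlabeled images.

    concepts: list[str]        -- LLM-generated candidate concepts
    images:   tensor (N,3,H,W) -- unlabeled images, already run through `preprocess`
    returns:  tensor (C,)      -- one score per concept (higher = more visually grounded)
    """
    text = tokenizer(concepts).to(device)
    t = model.encode_text(text)
    t = t / t.norm(dim=-1, keepdim=True)

    v = model.encode_image(images.to(device))
    v = v / v.norm(dim=-1, keepdim=True)

    sim = v @ t.T          # (N images, C concepts) cosine similarities
    return sim.mean(dim=0)  # average activation over the unlabeled image pool


def filter_concepts(concepts, images, keep_ratio=0.8):
    """Keep only the highest-scoring concepts (illustrative filtering criterion)."""
    scores = visual_activation_scores(concepts, images)
    k = max(1, int(keep_ratio * len(concepts)))
    keep = scores.topk(k).indices.tolist()
    return [concepts[i] for i in sorted(keep)]
```

Whether to keep a fixed fraction of concepts or to threshold the scores directly is a design choice; the sketch uses a fixed fraction only for simplicity.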


Related research

05/10/2021  Do Concept Bottleneck Models Learn as Intended?
Concept bottleneck models map from raw inputs to concepts, and then from...

06/21/2022  Automatic Concept Extraction for Concept Bottleneck-based Video Classification
Recent efforts in interpretable deep learning models have shown that con...

08/24/2023  Variational Information Pursuit with Large Language and Multimodal Models for Interpretable Predictions
Variational Information Pursuit (V-IP) is a framework for making interpr...

04/06/2021  Robust Semantic Interpretability: Revisiting Concept Activation Vectors
Interpretability methods for image classification assess model trustwort...

11/21/2022  Language in a Bottle: Language Model Guided Concept Bottlenecks for Interpretable Image Classification
Concept Bottleneck Models (CBM) are inherently interpretable models that...

12/18/2018  Interactive Naming for Explaining Deep Neural Networks: A Formative Study
We consider the problem of explaining the decisions of deep neural netwo...

06/06/2023  Scalable Concept Extraction in Industry 4.0
The industry 4.0 is leveraging digital technologies and machine learning...
