Towards Evaluating Explanations of Vision Transformers for Medical Imaging

04/12/2023
by Piotr Komorowski, et al.

As deep learning models increasingly find applications in critical domains such as medical imaging, the need for transparent and trustworthy decision-making becomes paramount. Many explainability methods provide insight into how these models make predictions by attributing importance to input features. As the Vision Transformer (ViT) emerges as a promising alternative to convolutional neural networks for image classification, its interpretability remains an open research question. This paper investigates the performance of various interpretation methods on a ViT applied to the classification of chest X-ray images. We evaluate ViT explanations along three criteria: faithfulness, sensitivity, and complexity. The results indicate that Layer-wise Relevance Propagation for transformers outperforms Local Interpretable Model-agnostic Explanations (LIME) and attention visualization, providing a more accurate and reliable representation of what the ViT has actually learned. Our findings offer insight into the applicability of ViT explanations in medical imaging and highlight the importance of using appropriate evaluation criteria to compare them.
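To make the faithfulness criterion concrete, below is a minimal sketch of a deletion-based (pixel-flipping) evaluation of the kind commonly used to compare attribution methods. The names `model`, `image`, and `attribution` are hypothetical placeholders for a PyTorch ViT classifier, an input tensor, and a relevance map; the protocol shown illustrates the generic deletion-curve idea, not necessarily the exact metric used in the paper.

```python
import torch

def deletion_curve(model, image, attribution, target, steps=20):
    """image: (1, C, H, W) tensor; attribution: (H, W) relevance map.

    Progressively zeroes out the most relevant pixels and records the
    target-class probability. A steeper early drop means the explanation
    ranked the truly important pixels first, i.e. it is more faithful.
    """
    model.eval()
    # Rank pixel positions by relevance, most important first.
    order = torch.argsort(attribution.flatten(), descending=True)
    masked = image.clone()
    flat = masked.view(1, image.shape[1], -1)  # shares storage with `masked`
    chunk = max(1, order.numel() // steps)
    probs = []
    with torch.no_grad():
        for i in range(steps + 1):
            prob = torch.softmax(model(masked), dim=-1)[0, target]
            probs.append(prob.item())
            # Delete the next chunk of most-relevant pixels (all channels).
            idx = order[i * chunk:(i + 1) * chunk]
            flat[:, :, idx] = 0.0
    return probs  # compare explanation methods by the area under this curve
```

Under this kind of protocol, an LRP, LIME, or attention map can be scored on the same footing: the method whose probability curve drops fastest is the one whose top-ranked pixels the model actually relies on.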

Related research

07/20/2023
Is Grad-CAM Explainable in Medical Images?
Explainable Deep Learning has gained significant attention in the field ...

07/19/2022
Towards Trustworthy Healthcare AI: Attention-Based Feature Learning for COVID-19 Screening With Chest Radiography
Building AI models with trustworthiness is important especially in regul...

07/16/2023
SHAMSUL: Simultaneous Heatmap-Analysis to investigate Medical Significance Utilizing Local interpretability methods
The interpretability of deep neural networks has become a subject of gre...

06/10/2022
Learning to Estimate Shapley Values with Vision Transformers
Transformers have become a default architecture in computer vision, but ...

09/11/2023
Evaluating the Reliability of CNN Models on Classifying Traffic and Road Signs using LIME
The objective of this investigation is to evaluate and contrast the effe...

08/06/2020
Assessing the (Un)Trustworthiness of Saliency Maps for Localizing Abnormalities in Medical Imaging
Saliency maps have become a widely used method to make deep learning mod...

04/13/2021
Fast Hierarchical Games for Image Explanations
As modern complex neural networks keep breaking records and solving hard...
