Interpretability-Aware Vision Transformer

09/14/2023
by Yao Qiang, et al.

Vision Transformers (ViTs) have become prominent models for solving various vision tasks. However, the interpretability of ViTs has not kept pace with their promising performance. While there has been a surge of interest in developing post hoc solutions to explain ViTs' outputs, these methods do not generalize across downstream tasks and transformer architectures. Furthermore, if a ViT is not properly trained on the given data and does not prioritize the region of interest, post hoc methods become less effective. Instead of developing another post hoc approach, we introduce a novel training procedure that inherently enhances model interpretability. Our interpretability-aware ViT (IA-ViT) draws inspiration from a fresh insight: both the class patch and the image patches consistently generate predicted distributions and attention maps. IA-ViT is composed of a feature extractor, a predictor, and an interpreter, which are trained jointly with an interpretability-aware training objective. Consequently, the interpreter simulates the behavior of the predictor and provides a faithful explanation through its single-head self-attention mechanism. Our comprehensive experimental results demonstrate the effectiveness of IA-ViT on several image classification tasks, with both qualitative and quantitative evaluations of model performance and interpretability. Source code is available from: https://github.com/qiangyao1988/IA-ViT.
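The abstract describes three jointly trained components: a feature extractor, a predictor head, and an interpreter built on single-head self-attention whose outputs are trained to match the predictor. The sketch below is a minimal, hypothetical PyTorch rendering of that structure, not the authors' implementation: the module names, dimensions, and the exact joint objective (here, task cross-entropy plus a KL term aligning the interpreter's distribution with the predictor's) are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class IAViTSketch(nn.Module):
    """Hypothetical sketch of IA-ViT's three components (structure assumed)."""

    def __init__(self, dim=64, num_classes=10, num_patches=16, depth=2):
        super().__init__()
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        # feature extractor: a small stack of transformer encoder blocks
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.extractor = nn.TransformerEncoder(layer, depth)
        # predictor: classification head on the class-token embedding
        self.predictor = nn.Linear(dim, num_classes)
        # interpreter: single-head self-attention over the patch embeddings
        self.inter_attn = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)
        self.inter_head = nn.Linear(dim, num_classes)

    def forward(self, patches):
        b = patches.size(0)
        x = torch.cat([self.cls_token.expand(b, -1, -1), patches], dim=1) + self.pos
        h = self.extractor(x)
        pred_logits = self.predictor(h[:, 0])  # prediction from the class token
        # interpreter attends from the class token to the patch tokens;
        # its attention weights serve as the explanation map
        ctx, attn = self.inter_attn(h[:, :1], h[:, 1:], h[:, 1:])
        expl_logits = self.inter_head(ctx.squeeze(1))
        return pred_logits, expl_logits, attn.squeeze(1)


def ia_loss(pred_logits, expl_logits, targets):
    # assumed joint objective: task loss plus a distillation-style KL term
    # that trains the interpreter to simulate the predictor's behavior
    ce = F.cross_entropy(pred_logits, targets)
    kl = F.kl_div(F.log_softmax(expl_logits, dim=-1),
                  F.softmax(pred_logits.detach(), dim=-1),
                  reduction="batchmean")
    return ce + kl
```

Because the interpreter is trained to reproduce the predictor's output distribution, its single attention map over the patches can be read directly as an explanation of the prediction, with no post hoc attribution step.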


