eX-ViT: A Novel eXplainable Vision Transformer for Weakly Supervised Semantic Segmentation

07/12/2022
by   Lu Yu, et al.

Recently, vision transformer models have become prominent for a range of vision tasks. These models, however, are usually opaque, with weak feature interpretability. Moreover, no existing method builds an intrinsically interpretable transformer that can explain its reasoning process and provide faithful explanations. To close these crucial gaps, we propose a novel vision transformer dubbed the eXplainable Vision Transformer (eX-ViT), an intrinsically interpretable transformer model that jointly discovers robust interpretable features and makes predictions. Specifically, eX-ViT is composed of the Explainable Multi-Head Attention (E-MHA) module, the Attribute-guided Explainer (AttE) module, and a self-supervised attribute-guided loss. E-MHA produces explainable attention weights that learn, from local patches, representations that are semantically interpretable with respect to model decisions and robust to noise. Meanwhile, AttE encodes discriminative attribute features for the target object through diverse attribute discovery, which constitutes faithful evidence for the model's predictions. In addition, a self-supervised attribute-guided loss is developed for eX-ViT: it learns enhanced representations through an attribute discriminability mechanism and an attribute diversity mechanism, so as to localize diverse, discriminative attributes and generate more robust explanations. As a result, the proposed eX-ViT uncovers faithful and robust interpretations with diverse attributes.
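The abstract does not give the E-MHA formulation, but the module builds on standard multi-head self-attention over patch embeddings, whose attention weights are the quantity E-MHA makes explainable. As context, here is a minimal NumPy sketch of *generic* multi-head self-attention (not the paper's E-MHA; all names and shapes here are illustrative assumptions):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, Wq, Wk, Wv, Wo, num_heads):
    """Generic multi-head self-attention over patch embeddings.

    x: (num_patches, d_model); Wq, Wk, Wv, Wo: (d_model, d_model).
    Returns the output features and the per-head attention weights,
    which are the patch-to-patch scores a method like E-MHA would
    constrain to be interpretable.
    """
    n, d = x.shape
    dh = d // num_heads
    # Project and split into heads: (num_heads, n, dh).
    q = (x @ Wq).reshape(n, num_heads, dh).transpose(1, 0, 2)
    k = (x @ Wk).reshape(n, num_heads, dh).transpose(1, 0, 2)
    v = (x @ Wv).reshape(n, num_heads, dh).transpose(1, 0, 2)
    # Scaled dot-product attention: (num_heads, n, n), rows sum to 1.
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(dh))
    # Merge heads back to (n, d) and apply the output projection.
    out = (attn @ v).transpose(1, 0, 2).reshape(n, d)
    return out @ Wo, attn

# Toy usage with random patch embeddings.
rng = np.random.default_rng(0)
n, d, h = 16, 64, 4
x = rng.normal(size=(n, d))
Wq, Wk, Wv, Wo = (rng.normal(size=(d, d)) * 0.1 for _ in range(4))
y, attn = multi_head_self_attention(x, Wq, Wk, Wv, Wo, h)
```

Each row of `attn` is a distribution over patches, which is why attention weights are a natural hook for interpretability: E-MHA's contribution, per the abstract, is shaping these weights so they reflect the model's decision evidence robustly.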

