Vision Transformer Visualization: What Neurons Tell and How Neurons Behave?

10/14/2022
by Van-Anh Nguyen, et al.

Recently, vision transformers (ViTs) have been applied successfully to various computer vision tasks. However, important questions such as why they work or how they behave remain largely unanswered. In this paper, we propose an effective visualization technique to expose the information carried in neurons and feature embeddings across a ViT's layers. Our approach is grounded in the computational process of ViTs and focuses on visualizing the local and global information in input images and the latent feature embeddings at multiple levels. Visualizations of the inputs and of the level-0 embeddings reveal interesting findings: they help explain why ViTs are generally robust to image occlusion and patch shuffling, and show that, unlike in CNNs, the level-0 embeddings already carry rich semantic details. Next, we develop a rigorous framework for effective visualization across layers, exposing the effects of ViT filters and their grouping/clustering behavior on object patches. Finally, we provide comprehensive experiments on real datasets to qualitatively and quantitatively demonstrate the merit of our proposed methods and our findings. Code: https://github.com/byM1902/ViT_visualization
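The abstract describes the approach only at a high level; the released code is at the GitHub link above. Purely as an illustrative sketch of the kind of per-layer embedding visualization described (not the authors' method), the following Python snippet assumes a pretrained timm ViT-B/16, captures the token embeddings emitted by every transformer block with forward hooks, and plots one patch token's cosine similarity to all patches as a 14x14 heatmap. The block index and patch index are arbitrary stand-ins.

    # Illustrative sketch only (not the authors' released code -- see the GitHub link above):
    # capture the token embeddings produced by every ViT block with forward hooks,
    # then visualize one patch token's cosine similarity to all patches as a heatmap.
    import torch
    import timm
    import matplotlib.pyplot as plt

    model = timm.create_model("vit_base_patch16_224", pretrained=True).eval()

    captured = []  # one (1, 1+N, D) tensor per transformer block

    def save_output(_module, _inputs, output):
        captured.append(output.detach())

    for block in model.blocks:
        block.register_forward_hook(save_output)

    image = torch.randn(1, 3, 224, 224)   # stand-in for a preprocessed input image
    with torch.no_grad():
        model(image)

    layer, query_patch = 5, 100           # arbitrary block index and patch index
    tokens = captured[layer][0, 1:]       # drop the CLS token -> (196, 768)
    tokens = torch.nn.functional.normalize(tokens, dim=-1)
    similarity = (tokens @ tokens[query_patch]).reshape(14, 14)  # 14x14 patch grid

    plt.imshow(similarity.numpy(), cmap="viridis")
    plt.title(f"Block {layer}: similarity to patch {query_patch}")
    plt.colorbar()
    plt.show()

Using hooks keeps the sketch independent of the classifier head; the same pattern can be pointed at the level-0 embeddings by hooking model.patch_embed in timm instead of the blocks.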


Related research

Transformer in Transformer (02/27/2021)
Transformer is a type of self-attention-based neural networks originally...

Locally Shifted Attention With Early Global Integration (12/09/2021)
Recent work has shown the potential of transformers for computer vision ...

NomMer: Nominate Synergistic Context in Vision Transformer for Visual Recognition (11/25/2021)
Recently, Vision Transformers (ViT), with the self-attention (SA) as the...

Patches Are All You Need? (01/24/2022)
Although convolutional networks have been the dominant architecture for ...

What do Vision Transformers Learn? A Visual Exploration (12/13/2022)
Vision transformers (ViTs) are quickly becoming the de-facto architectur...

NeuroMapper: In-browser Visualizer for Neural Network Training (10/22/2022)
We present our ongoing work NeuroMapper, an in-browser visualization too...

VISION DIFFMASK: Faithful Interpretation of Vision Transformers with Differentiable Patch Masking (04/13/2023)
The lack of interpretability of the Vision Transformer may hinder its us...
