IA-RED^2: Interpretability-Aware Redundancy Reduction for Vision Transformers

06/23/2021
by Bowen Pan, et al.

The self-attention-based transformer has recently become the leading backbone in computer vision. Despite its impressive success across a variety of vision tasks, the transformer still suffers from heavy computation and intensive memory cost. To address this limitation, this paper presents an Interpretability-Aware REDundancy REDuction framework (IA-RED^2). We start by observing a large amount of redundant computation, mainly spent on uncorrelated input patches, and then introduce an interpretable module to dynamically and gracefully drop these redundant patches. This novel framework is then extended to a hierarchical structure, where uncorrelated tokens at different stages are gradually removed, resulting in a considerable shrinkage of computational cost. We include extensive experiments on both image and video tasks, where our method delivers up to 1.4X speed-up for state-of-the-art models like DeiT and TimeSformer while sacrificing less than 0.7% accuracy. More importantly, contrary to other acceleration approaches, our method is inherently interpretable, with substantial visual evidence, making the vision transformer closer to a more human-understandable architecture while being lighter. We demonstrate, with both qualitative and quantitative results, that the interpretability that naturally emerges in our framework can outperform the raw attention learned by the original vision transformer, as well as interpretations produced by off-the-shelf methods. Project page: http://people.csail.mit.edu/bpan/ia-red/.
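The core mechanism the abstract describes, pruning uninformative patch tokens before later transformer blocks, can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration under stated assumptions, not the authors' implementation: `PatchScorer`, `drop_redundant_patches`, and `keep_ratio` are hypothetical names, the hard top-k selection stands in for the paper's learned interpretable module, and training of the scorer is omitted entirely.

```python
import torch
import torch.nn as nn

class PatchScorer(nn.Module):
    """Hypothetical lightweight head that scores how informative each patch token is."""
    def __init__(self, dim: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.LayerNorm(dim),
            nn.Linear(dim, dim // 4),
            nn.GELU(),
            nn.Linear(dim // 4, 1),
        )

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_patches, dim) -> scores: (batch, num_patches)
        return self.mlp(tokens).squeeze(-1)

def drop_redundant_patches(tokens: torch.Tensor,
                           scorer: PatchScorer,
                           keep_ratio: float = 0.7) -> torch.Tensor:
    """Keep the top-`keep_ratio` fraction of patch tokens, drop the rest.

    The class token (index 0) is always kept. This hard top-k is an
    inference-time stand-in; how the scorer is trained is beyond this sketch.
    """
    cls_tok, patches = tokens[:, :1], tokens[:, 1:]
    scores = scorer(patches)                                   # (B, N)
    k = max(1, int(patches.shape[1] * keep_ratio))
    idx = scores.topk(k, dim=1).indices                        # (B, k)
    idx = idx.unsqueeze(-1).expand(-1, -1, patches.shape[-1])  # (B, k, D)
    kept = patches.gather(1, idx)
    return torch.cat([cls_tok, kept], dim=1)

# Hierarchical usage sketch: prune between groups of transformer blocks,
# so tokens are gradually removed at successive stages.
# for stage, scorer in zip(stages, scorers):
#     for blk in stage:
#         x = blk(x)
#     x = drop_redundant_patches(x, scorer, keep_ratio=0.7)
```

Because self-attention cost grows quadratically with sequence length, each pruning step applied between stages reduces the cost of every subsequent block quadratically in the number of kept tokens (and linearly for the MLP sublayers), which is where the reported speed-up comes from.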


Related research:

- AdaViT: Adaptive Vision Transformers for Efficient Image Recognition (11/30/2021). Built on top of self-attention mechanisms, vision transformers have demo...
- VA-RED^2: Video Adaptive Redundancy Reduction (02/15/2021). Performing inference on deep learning models for videos remains a challe...
- Demystify Self-Attention in Vision Transformers from a Semantic Perspective: Analysis and Application (11/13/2022). Self-attention mechanisms, especially multi-head self-attention (MSA), h...
- Learning Viewpoint-Agnostic Visual Representations by Recovering Tokens in 3D Space (06/23/2022). Humans are remarkably flexible in understanding viewpoint changes due to...
- Interpretability-Aware Vision Transformer (09/14/2023). Vision Transformers (ViTs) have become prominent models for solving vari...
- Visual Transformer for Soil Classification (09/07/2022). Our food security is built on the foundation of soil. Farmers would be u...
- Vision Transformer Adapters for Generalizable Multitask Learning (08/23/2023). We introduce the first multitasking vision transformer adapters that lea...
