Unraveling Attention via Convex Duality: Analysis and Interpretations of Vision Transformers

05/17/2022
by   Arda Sahiner, et al.

Vision transformers using self-attention or its proposed alternatives have demonstrated promising results in many image-related tasks. However, the underlying inductive bias of attention is not well understood. To address this issue, this paper analyzes attention through the lens of convex duality. For non-linear dot-product self-attention, as well as alternative mechanisms such as MLP-Mixer and the Fourier Neural Operator (FNO), we derive equivalent finite-dimensional convex problems that are interpretable and solvable to global optimality. The convex programs lead to block nuclear-norm regularization that promotes low rank in the latent feature and token dimensions. In particular, we show how self-attention networks implicitly cluster the tokens based on their latent similarity. We conduct experiments transferring a pre-trained transformer backbone to CIFAR-100 classification by fine-tuning a variety of convex attention heads. The results indicate the merits of the bias induced by attention compared with existing MLP or linear heads.
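As a rough illustration, not taken from the paper: the nuclear norm of a matrix (the sum of its singular values) is the standard convex surrogate for rank, so a block nuclear-norm penalty of the kind described above biases solutions toward low-rank structure in the regularized blocks. The sketch below fine-tunes a plain linear head on frozen backbone features for CIFAR-100 with a nuclear-norm penalty on the head's weights; it is a minimal stand-in for the head-fine-tuning setup the abstract mentions, not the authors' convex attention formulation, and all names, dimensions, and hyperparameters are placeholders.

```python
# Minimal sketch (PyTorch): nuclear-norm-regularized head fine-tuning on
# frozen backbone features. Illustrative only -- the head here is a plain
# linear map, not the paper's convex attention head.
import torch
import torch.nn as nn

d_feat, n_classes = 768, 100   # e.g. ViT feature dim, CIFAR-100 classes
beta = 1e-3                    # regularization strength (placeholder)

head = nn.Linear(d_feat, n_classes)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def training_step(features, labels):
    """features: (batch, d_feat) frozen backbone outputs; labels: (batch,)."""
    logits = head(features)
    # Nuclear norm = sum of singular values of the head's weight matrix;
    # penalizing it encourages a low-rank weight matrix.
    nuc = torch.linalg.matrix_norm(head.weight, ord='nuc')
    loss = criterion(logits, labels) + beta * nuc
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with random stand-in data (real features would come from a frozen backbone):
feats = torch.randn(32, d_feat)
labels = torch.randint(0, n_classes, (32,))
print(training_step(feats, labels))
```

With the backbone frozen and a linear head, the cross-entropy term is convex in the head's weights and the nuclear norm is a convex penalty, so this toy head-only objective is itself convex; the paper's contribution is deriving such convex equivalents for the attention mechanisms themselves.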

