Efficient Attention-free Video Shift Transformers

08/23/2022
by Adrian Bulat et al.

This paper tackles the problem of efficient video recognition. In this area, video transformers have recently dominated the efficiency (top-1 accuracy vs. FLOPs) spectrum. At the same time, there have been some attempts in the image domain that challenge the necessity of the self-attention operation within the transformer architecture, advocating simpler approaches for token mixing. However, there are no results yet for video recognition, where the self-attention operator has a significantly higher impact on efficiency than in the image case. To address this gap, in this paper we make the following contributions: (a) we construct a highly efficient and accurate attention-free block based on the shift operator, coined the Affine-Shift block, specifically designed to approximate as closely as possible the operations in the MHSA block of a Transformer layer. Based on our Affine-Shift block, we construct our Affine-Shift Transformer and show that it already outperforms all existing shift/MLP-based architectures on ImageNet classification. (b) We extend our formulation to the video domain to construct the Video Affine-Shift Transformer (VAST), the very first purely attention-free shift-based video transformer. (c) We show that VAST significantly outperforms recent state-of-the-art transformers on the most popular action recognition benchmarks for models with a low computational and memory footprint. Code will be made available.
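To give a concrete sense of attention-free token mixing with shifts, the sketch below implements a minimal shift-based mixing layer with a learnable per-channel affine modulation in PyTorch. This is an illustrative assumption only: the abstract does not specify the Affine-Shift block's design, so the class name `AffineShiftMixer`, the channel grouping, the shift pattern, and the affine parametrization are all hypothetical rather than the paper's actual block.

```python
# Minimal illustrative sketch -- NOT the paper's actual Affine-Shift block.
# The abstract only says the block mixes tokens with shifts and approximates
# MHSA; the grouping, shift pattern, and affine parametrization below are
# assumptions made for illustration.
import torch
import torch.nn as nn


class AffineShiftMixer(nn.Module):
    """Token mixing via zero-FLOP channel shifts along space and time,
    followed by a learnable per-channel affine (scale + bias) and a
    linear projection -- no attention is computed anywhere."""

    def __init__(self, dim: int):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(dim))   # per-channel affine scale
        self.bias = nn.Parameter(torch.zeros(dim))   # per-channel affine bias
        self.proj = nn.Linear(dim, dim)              # channel-mixing projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, height, width, channels)
        b, t, h, w, c = x.shape
        g = c // 5  # 4 shifted channel groups + 1 identity group
        out = x.clone()
        # torch.roll wraps circularly; shift operators in the literature
        # usually zero-pad the vacated border positions instead.
        out[..., 0*g:1*g] = torch.roll(x[..., 0*g:1*g],  1, dims=2)  # shift down (H)
        out[..., 1*g:2*g] = torch.roll(x[..., 1*g:2*g], -1, dims=2)  # shift up (H)
        out[..., 2*g:3*g] = torch.roll(x[..., 2*g:3*g],  1, dims=3)  # shift right (W)
        out[..., 3*g:4*g] = torch.roll(x[..., 3*g:4*g], -1, dims=1)  # shift in time (T)
        return self.proj(out * self.scale + self.bias)


if __name__ == "__main__":
    x = torch.randn(2, 8, 14, 14, 320)  # (B, T, H, W, C)
    y = AffineShiftMixer(320)(x)
    print(y.shape)  # torch.Size([2, 8, 14, 14, 320])
```

Because the shifts themselves require no multiplications, the cost of such a layer is dominated by the linear projection, which is the efficiency argument behind shift-based token mixing in general.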


