Is Space-Time Attention All You Need for Video Understanding?

02/09/2021
by Gedas Bertasius, et al.

We present a convolution-free approach to video classification built exclusively on self-attention over space and time. Our method, named "TimeSformer," adapts the standard Transformer architecture to video by enabling spatiotemporal feature learning directly from a sequence of frame-level patches. Our experimental study compares different self-attention schemes and suggests that "divided attention," where temporal attention and spatial attention are separately applied within each block, leads to the best video classification accuracy among the design choices considered. Despite the radically different design compared to the prominent paradigm of 3D convolutional architectures for video, TimeSformer achieves state-of-the-art results on several major action recognition benchmarks, including the best reported accuracy on Kinetics-400 and Kinetics-600. Furthermore, our model is faster to train and has higher test-time efficiency compared to competing architectures. Code and pretrained models will be made publicly available.
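The "divided attention" scheme described above can be sketched in a few lines: given a (frames × patches × dim) tensor of frame-level patch tokens, each block first lets every spatial position attend across time, then lets the patches of each frame attend across space. The sketch below is a minimal single-head numpy illustration of that ordering only; the function names are ours, and it omits the learned QKV/output projections, multi-head attention, LayerNorm, and MLP that the actual Transformer block contains.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scaled dot-product attention over the second-to-last axis
    d = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d)
    return softmax(scores, axis=-1) @ v

def divided_space_time_attention(x):
    """x: (T, S, D) frame-level patch tokens.

    Temporal attention first: each spatial patch position attends to
    the same position across all T frames. Spatial attention second:
    within each frame, the S patches attend to one another.
    """
    T, S, D = x.shape
    # temporal attention: make time the sequence axis
    xt = x.transpose(1, 0, 2)            # (S, T, D)
    xt = xt + attention(xt, xt, xt)      # residual, per spatial index
    x = xt.transpose(1, 0, 2)            # back to (T, S, D)
    # spatial attention: sequence axis is the patches of each frame
    x = x + attention(x, x, x)           # residual, per frame
    return x
```

Factoring the full spatiotemporal attention this way reduces the per-token cost from O(T·S) comparisons (joint attention) to O(T) + O(S), which is part of why the divided scheme scales to longer clips and higher resolutions.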
