Dual-path Adaptation from Image to Video Transformers

03/17/2023
by Jungin Park, et al.

In this paper, we efficiently transfer the superior representation power of vision foundation models, such as ViT and Swin, to video understanding with only a few trainable parameters. Previous adaptation methods have modeled spatial and temporal information simultaneously with a unified learnable module, but still fall short of fully leveraging the representative capabilities of image transformers. We argue that the popular dual-path (two-stream) architecture in video models can mitigate this problem. We propose a novel DualPath adaptation separated into spatial and temporal adaptation paths, where a lightweight bottleneck adapter is employed in each transformer block. For temporal dynamic modeling in particular, we arrange consecutive frames into a grid-like frameset so that the model can precisely imitate the capability of vision transformers to extrapolate relationships between tokens. In addition, we extensively investigate multiple baselines from a unified perspective on video understanding and compare them with DualPath. Experimental results on four action recognition benchmarks show that pretrained image transformers equipped with DualPath generalize effectively beyond their original data domain.
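The abstract names two concrete mechanisms: a lightweight bottleneck adapter inserted alongside each frozen transformer block, and a grid-like frameset that tiles consecutive frames into a single image so a frozen image transformer can relate tokens across time. The following is a minimal PyTorch-style sketch of both ideas under stated assumptions; the module names, bottleneck width, grid size, and zero-initialized up-projection are illustrative choices, not the authors' released implementation.

```python
# Hedged sketch of a bottleneck adapter and a grid-like frameset.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Lightweight adapter: down-project, non-linearity, up-project,
    plus a residual connection. Only these few parameters are trained;
    the backbone transformer stays frozen."""

    def __init__(self, dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck_dim)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck_dim, dim)
        # Zero-init the up-projection so the adapter starts as an
        # identity mapping and does not perturb the pretrained features.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))


def to_grid_frameset(frames: torch.Tensor, rows: int = 2, cols: int = 2) -> torch.Tensor:
    """Tile consecutive frames into one grid-like image, so the frozen
    image transformer can model temporal relations as spatial ones.
    frames: (B, T, C, H, W) with T == rows * cols -> (B, C, rows*H, cols*W)."""
    b, t, c, h, w = frames.shape
    assert t == rows * cols, "frame count must fill the grid"
    grid = frames.view(b, rows, cols, c, h, w)
    grid = grid.permute(0, 3, 1, 4, 2, 5)  # (B, C, rows, H, cols, W)
    return grid.reshape(b, c, rows * h, cols * w)


# Example: four consecutive 224x224 frames become one 448x448 grid input,
# and a dummy batch of ViT tokens passes through the adapter unchanged
# at initialization (identity mapping).
frames = torch.randn(2, 4, 3, 224, 224)
grid = to_grid_frameset(frames)          # shape: (2, 3, 448, 448)
adapter = BottleneckAdapter(dim=768)
tokens = torch.randn(2, 196, 768)        # dummy ViT patch tokens
out = adapter(tokens)                    # shape: (2, 196, 768)
```

The zero-initialized up-projection is a common trick in adapter tuning so that training starts from the frozen backbone's behavior; whether DualPath uses exactly this initialization is an assumption here.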


Related research

07/01/2021 · VideoLightFormer: Lightweight Action Recognition using Transformers
Efficient video action recognition remains a challenging problem. One la...

06/24/2021 · Video Swin Transformer
The vision community is witnessing a modeling shift from CNNs to Transfo...

07/19/2022 · Time Is MattEr: Temporal Self-supervision for Video Transformers
Understanding temporal dynamics of video is an essential aspect of learn...

08/25/2023 · Eventful Transformers: Leveraging Temporal Redundancy in Vision Transformers
Vision Transformers achieve impressive accuracy across a range of visual...

11/23/2021 · Efficient Video Transformers with Spatial-Temporal Token Selection
Video transformers have achieved impressive results on major video recog...

04/18/2023 · SViTT: Temporal Learning of Sparse Video-Text Transformers
Do video-text transformers learn to model temporal relationships across ...

09/28/2022 · DeViT: Deformed Vision Transformers in Video Inpainting
This paper proposes a novel video inpainting method. We make three main ...
