Compact Transformer Tracker with Correlative Masked Modeling

01/26/2023
by Zikai Song, et al.

The Transformer framework has shown superior performance in visual object tracking, owing to its strength in aggregating information across the template and search image with the well-known attention mechanism. Most recent advances focus on exploring attention-mechanism variants for better information aggregation. We find that these schemes are equivalent to, or even merely subsets of, the basic self-attention mechanism. In this paper, we show that the vanilla self-attention structure is sufficient for information aggregation, and that structural adaptation is unnecessary. The key is not the attention structure, but how to extract discriminative features for tracking and how to enhance communication between the target and the search image. Based on this finding, we adopt the basic vision transformer (ViT) architecture as our main tracker and concatenate the template and search image for feature embedding. To guide the encoder to capture invariant features for tracking, we attach a lightweight correlative masked decoder that reconstructs the original template and search image from the corresponding masked tokens. The correlative masked decoder serves as a plugin for the compact transformer tracker and is skipped during inference. Our compact tracker uses the simplest possible structure, consisting only of a ViT backbone and a box head, and runs at 40 fps. Extensive experiments show that the proposed compact transformer tracker outperforms existing approaches, including those with advanced attention variants, and demonstrate the sufficiency of self-attention for tracking. Our method achieves state-of-the-art performance on five challenging benchmarks: VOT2020, UAV123, LaSOT, TrackingNet, and GOT-10k. Our project is available at https://github.com/HUSTDML/CTTrack.
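To make the described pipeline concrete, below is a minimal PyTorch-style sketch of such an architecture (not the authors' released code; the module sizes, the box parameterization, and the mean-pooled readout are illustrative assumptions, and the correlative masking of tokens is omitted for brevity). Template and search crops are patch-embedded, concatenated, and passed through a plain self-attention encoder; a box head reads out the prediction, while a lightweight reconstruction decoder is attached only during training and skipped at inference.

```python
# Hypothetical sketch of the pipeline described in the abstract, not the authors' implementation.
import torch
import torch.nn as nn

class CompactTrackerSketch(nn.Module):
    def __init__(self, patch=16, dim=256, depth=6, heads=8):
        super().__init__()
        # Shared patch embedding for the template and search crops (3-channel images assumed).
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        # Vanilla self-attention encoder: no attention variants, just stacked standard layers.
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        # Box head over the search tokens; (cx, cy, w, h) output is an assumption.
        self.box_head = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 4))
        # Lightweight decoder reconstructing per-token pixels; used only during training.
        self.decoder = nn.Sequential(nn.Linear(dim, dim), nn.GELU(),
                                     nn.Linear(dim, patch * patch * 3))

    def tokenize(self, img):
        x = self.embed(img)                      # B x dim x H/p x W/p
        return x.flatten(2).transpose(1, 2)      # B x N x dim

    def forward(self, template, search, training=True):
        z, x = self.tokenize(template), self.tokenize(search)
        tokens = self.encoder(torch.cat([z, x], dim=1))   # joint template/search self-attention
        box = self.box_head(tokens[:, z.shape[1]:].mean(dim=1))  # pool search tokens -> box
        if not training:
            return box                            # reconstruction decoder is skipped at inference
        recon = self.decoder(tokens)              # per-token pixel reconstruction target
        return box, recon
```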

Related research

05/09/2021  TrTr: Visual Tracking with Transformer
Template-based discriminative trackers are currently the dominant tracki...

07/20/2022  AiATrack: Attention in Attention for Transformer Visual Tracking
Transformer trackers have achieved impressive advancements recently, whe...

08/01/2022  Local Perception-Aware Transformer for Aerial Tracking
Transformer-based visual object tracking has been utilized extensively. ...

05/31/2022  Joint Spatial-Temporal and Appearance Modeling with Transformer for Multiple Object Tracking
The recent trend in multiple object tracking (MOT) is heading towards le...

05/08/2022  SparseTT: Visual Tracking with Sparse Transformers
Transformers have been successfully applied to the visual tracking task ...

05/08/2022  Transformer Tracking with Cyclic Shifting Window Attention
Transformer architecture has been showing its great strength in visual o...

05/25/2023  MixFormerV2: Efficient Fully Transformer Tracking
Transformer-based trackers have achieved strong accuracy on the standard...
