
Siamese Transformer Pyramid Networks for Real-Time UAV Tracking

by   Daitao Xing, et al.
New York University

Recent object tracking methods depend on deep networks or convoluted architectures. Most of these trackers can hardly meet real-time processing requirements on mobile platforms with limited computing resources. In this work, we introduce the Siamese Transformer Pyramid Network (SiamTPN), which inherits the advantages of both CNN and Transformer architectures. Specifically, we exploit the inherent feature pyramid of a lightweight network (ShuffleNetV2) and reinforce it with a Transformer to construct a robust, target-specific appearance model. A centralized architecture with lateral cross-attention is developed for building augmented high-level feature maps. To avoid the heavy computation and memory cost of fusing pyramid representations with the Transformer, we further introduce a pooling attention module, which significantly reduces memory and time complexity while improving robustness. Comprehensive experiments on both aerial and prevalent tracking benchmarks show competitive results at high speed, demonstrating the effectiveness of SiamTPN. Moreover, our fastest variant operates at over 30 Hz on a single CPU core and obtains an AUC score of 58.1%.
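The abstract's key efficiency idea is pooling attention: down-sample the key and value tokens before computing attention, so the score matrix shrinks from (n_q × n_k) to (n_q × n_k/s) for pooling stride s. The following is a minimal NumPy sketch of that general technique, not the paper's actual implementation; the function names, the average-pooling choice, and the stride value are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def avg_pool_seq(x, stride):
    # average-pool tokens along the sequence axis: (n, d) -> (n // stride, d)
    n, d = x.shape
    n_out = n // stride
    return x[: n_out * stride].reshape(n_out, stride, d).mean(axis=1)

def pooling_attention(q, k, v, stride=4):
    # hypothetical sketch: pool K and V before attention, so the score
    # matrix has shape (n_q, n_k // stride) instead of (n_q, n_k)
    d = q.shape[-1]
    k_p = avg_pool_seq(k, stride)
    v_p = avg_pool_seq(v, stride)
    scores = q @ k_p.T / np.sqrt(d)
    return softmax(scores) @ v_p

rng = np.random.default_rng(0)
q = rng.standard_normal((16, 32))
k = rng.standard_normal((64, 32))
v = rng.standard_normal((64, 32))
out = pooling_attention(q, k, v, stride=4)
print(out.shape)  # (16, 32)
```

With stride 4, the attention score matrix here is 16×16 rather than 16×64, which is where the memory and time savings claimed in the abstract come from; the output shape is unchanged.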



