Unsupervised Cross-Modal Distillation for Thermal Infrared Tracking

07/31/2021
by   Jingxian Sun, et al.

The target representation learned by convolutional neural networks plays an important role in Thermal Infrared (TIR) tracking. Currently, most top-performing TIR trackers still employ representations learned by models trained on RGB data. However, this representation does not take into account the information in the TIR modality itself, which limits the performance of TIR tracking. To solve this problem, we propose to distill representations of the TIR modality from the RGB modality with Cross-Modal Distillation (CMD) on a large amount of unlabeled paired RGB-TIR data. We take advantage of the two-branch architecture of the baseline tracker, i.e., DiMP, to perform cross-modal distillation on two components of the tracker. Specifically, we use one branch as a teacher module that distills the representation learned by the model into the other branch. Benefiting from the powerful model in the RGB modality, the cross-modal distillation learns a TIR-specific representation that promotes TIR tracking. The proposed approach can be conveniently incorporated into different baseline trackers as a generic and independent component. Furthermore, the semantic coherence of paired RGB and TIR images is exploited as a supervisory signal in the distillation loss for cross-modal knowledge transfer. In practice, three different approaches are explored to generate paired RGB-TIR patches with the same semantics for training in an unsupervised way, which makes it easy to scale to even larger amounts of unlabeled training data. Extensive experiments on the LSOTB-TIR and PTB-TIR datasets demonstrate that the proposed cross-modal distillation method effectively learns TIR-specific target representations transferred from the RGB modality. Our tracker outperforms the baseline tracker, achieving an absolute gain of 2.3 points.
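
To illustrate the core idea of cross-modal distillation on paired RGB-TIR data, the following is a minimal sketch in PyTorch, not the authors' implementation: a frozen RGB "teacher" branch supervises a TIR "student" branch through a feature-matching loss on semantically paired patches. The module names, backbone, and L2 loss are simplifying assumptions; the paper builds on the DiMP two-branch architecture.

```python
# Hypothetical sketch of cross-modal feature distillation (not the authors' code).
# A frozen RGB teacher branch supervises a TIR student branch on paired
# RGB-TIR patches that share the same semantics.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureBranch(nn.Module):
    """Stand-in backbone producing a feature map for one modality."""
    def __init__(self, in_channels: int, feat_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)


def distillation_loss(tir_feat, rgb_feat):
    """L2 loss aligning TIR (student) features with RGB (teacher) features;
    the paired patches provide the semantic correspondence."""
    return F.mse_loss(tir_feat, rgb_feat.detach())


if __name__ == "__main__":
    rgb_teacher = FeatureBranch(in_channels=3)   # assumed pretrained on RGB data
    tir_student = FeatureBranch(in_channels=1)   # trained on unlabeled TIR data
    rgb_teacher.eval()
    for p in rgb_teacher.parameters():           # freeze the teacher branch
        p.requires_grad_(False)

    optimizer = torch.optim.Adam(tir_student.parameters(), lr=1e-4)

    # One training step on a batch of paired RGB-TIR patches (random tensors here).
    rgb_patch = torch.randn(8, 3, 128, 128)
    tir_patch = torch.randn(8, 1, 128, 128)

    with torch.no_grad():
        rgb_feat = rgb_teacher(rgb_patch)
    tir_feat = tir_student(tir_patch)

    loss = distillation_loss(tir_feat, rgb_feat)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"distillation loss: {loss.item():.4f}")
```

Because the supervisory signal comes only from the semantic coherence of the paired patches, no TIR annotations are needed, which is what allows the method to scale to large amounts of unlabeled training data.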

