simCrossTrans: A Simple Cross-Modality Transfer Learning for Object Detection with ConvNets or Vision Transformers

03/20/2022
by   Xiaoke Shen, et al.

Transfer learning is widely used in computer vision (CV) and natural language processing (NLP) and has achieved great success. Most transfer learning systems, however, operate within a single modality (e.g., RGB images in CV and text in NLP); cross-modality transfer learning (CMTL) systems are scarce. In this work, we study CMTL from 2D to 3D sensors to explore the upper-bound performance of 3D-sensor-only systems, which play critical roles in robotic navigation and perform well in low-light scenarios. While most CMTL pipelines from 2D to 3D vision are complicated and based on Convolutional Neural Networks (ConvNets), ours is easy to implement and expand, and is based on both ConvNets and Vision Transformers (ViTs): 1) By converting point clouds to pseudo-images, we can reuse an almost identical network from models pre-trained on 2D images, which makes our system easy to implement and expand. 2) ViTs have recently shown good performance and robustness to occlusion, which is one of the key reasons for the poor performance of 3D vision systems. We explored both a ViT and a ConvNet of similar model size to investigate the performance difference. We name our approach simCrossTrans: simple cross-modality transfer learning with ConvNets or ViTs. Experiments on the SUN RGB-D dataset show that simCrossTrans achieves absolute performance gains of 13.2% and 16.1% with ConvNets and ViTs, respectively. We also observe that the ViT-based system performs 9.7% better than the ConvNet-based one, showing the power of simCrossTrans with ViTs. simCrossTrans with ViTs surpasses the previous state-of-the-art (SOTA) by a large margin of +15.4% mAP50. Compared with the previous 2D detection SOTA based on RGB images, our depth-image-only system has only a 1% gap. The code, training/inference logs, and models are publicly available at https://github.com/liketheflower/simCrossTrans
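The pseudo-image idea in 1) is simple enough to sketch in a few lines. Below is a minimal, hypothetical Python sketch, not the authors' exact pipeline: the function name depth_to_pseudo_image and the choice of a torchvision ResNet-50 backbone are illustrative assumptions. The point it demonstrates is that a depth map, once normalized and tiled to three channels, can be consumed unchanged by a backbone pre-trained on RGB images.

```python
# Hypothetical sketch of the cross-modality transfer idea (not the
# authors' exact pipeline): render a depth map as a 3-channel
# pseudo-image so an RGB-pretrained 2D backbone can be reused as-is.
import numpy as np
import torch
import torchvision

def depth_to_pseudo_image(depth: np.ndarray) -> torch.Tensor:
    """Normalize an HxW depth map to [0, 1] and tile it to 3 channels."""
    d = depth.astype(np.float32)
    d = (d - d.min()) / (d.max() - d.min() + 1e-8)
    pseudo = np.stack([d, d, d], axis=0)          # (3, H, W), RGB-like
    return torch.from_numpy(pseudo).unsqueeze(0)  # (1, 3, H, W)

# Load an ImageNet-pretrained 2D backbone; in a real detector this
# backbone would be fine-tuned on the depth pseudo-images.
backbone = torchvision.models.resnet50(weights="IMAGENET1K_V2")
backbone.eval()

depth = np.random.rand(480, 640)  # stand-in for a real sensor depth map
with torch.no_grad():
    features = backbone(depth_to_pseudo_image(depth))
print(features.shape)  # torch.Size([1, 1000])
```

In the actual simCrossTrans setting, such pseudo-images would feed a full detection network whose pre-trained weights are then fine-tuned on the 3D-sensor data; see the repository linked above for the real implementation.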

Related research

07/03/2022 · You Only Need One Detector: Unified Object Detector for Different Modalities based on Vision Transformers
Most systems use different models for different modalities, such as one ...

08/20/2019 · LXMERT: Learning Cross-Modality Encoder Representations from Transformers
Vision-and-language reasoning requires an understanding of visual concep...

07/05/2023 · Interactive Image Segmentation with Cross-Modality Vision Transformers
Interactive image segmentation aims to segment the target from the backg...

09/14/2023 · NineRec: A Benchmark Dataset Suite for Evaluating Transferable Recommendation
Learning a recommender system model from an item's raw modality features...

01/04/2021 · Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling
Semantic Role Labeling (SRL) is a core Natural Language Processing task....

05/24/2023 · Exploring Adapter-based Transfer Learning for Recommender Systems: Empirical Studies and Practical Insights
Adapters, a plug-in neural network module with some tunable parameters, ...

11/18/2020 · EasyTransfer – A Simple and Scalable Deep Transfer Learning Platform for NLP Applications
The literature has witnessed the success of applying deep Transfer Learn...
