A Strong Transfer Baseline for RGB-D Fusion in Vision Transformers

10/03/2022
by Georgios Tziafas, et al.

The Vision Transformer (ViT) architecture has recently established its place in the computer vision literature, with multiple architectures for recognizing image data and other visual modalities. However, training ViTs for RGB-D object recognition remains an understudied topic, viewed in recent literature only through the lens of multi-task pretraining across multiple modalities. Such approaches are often computationally intensive and have not yet been applied to challenging object-level classification tasks. In this work, we propose a simple yet strong recipe for transferring pretrained ViTs to RGB-D domains for single-view 3D object recognition, focusing on fusing RGB and depth representations encoded jointly by the ViT. Compared to previous work on multimodal Transformers, the key challenge here is to use the attested flexibility of ViTs to capture cross-modal interactions at the downstream stage rather than the pretraining stage. We explore which depth representation yields better accuracy and compare two methods for injecting RGB-D fusion into the ViT architecture (i.e., early vs. late fusion). Our results on the Washington RGB-D Objects dataset demonstrate that in such RGB → RGB-D transfer scenarios, late fusion techniques work better than the more commonly employed early fusion. With our transfer baseline, adapted ViTs score up to 95.1% top-1 accuracy on Washington, a new state of the art for this benchmark. We additionally evaluate our approach with an open-ended lifelong learning protocol, where we show that our adapted RGB-D encoder leads to features that outperform unimodal encoders, even without explicit fine-tuning. We further integrate our method with a robot framework and demonstrate how it can serve as a perception utility in an interactive robot learning scenario, both in simulation and with a real robot.
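To make the early vs. late fusion distinction concrete, here is a minimal PyTorch sketch. The TinyViT encoder, its dimensions, and the mean-pooled classification head are illustrative stand-ins, not the paper's actual architecture or hyperparameters; in practice the backbone would be a pretrained ViT, and the depth map would be rendered to three channels (e.g., a colorized or surface-normal rendering) so it matches the RGB patch embedding.

```python
# Minimal sketch contrasting early vs. late RGB-D fusion in a ViT-style encoder.
# Pure PyTorch; all module names and sizes here are illustrative assumptions.
import torch
import torch.nn as nn


class TinyViT(nn.Module):
    """Bare-bones ViT encoder: patchify -> transformer -> mean-pooled feature."""

    def __init__(self, img_size=224, patch=16, dim=384, n_layers=6, heads=6):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        n_tokens = (img_size // patch) ** 2
        self.pos_embed = nn.Parameter(torch.zeros(1, n_tokens, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)

    def tokens(self, x):                      # (B, 3, H, W) -> (B, N, dim)
        return self.patch_embed(x).flatten(2).transpose(1, 2) + self.pos_embed

    def forward(self, x):                     # pooled feature, (B, dim)
        return self.blocks(self.tokens(x)).mean(dim=1)


class LateFusion(nn.Module):
    """Encode RGB and depth separately with a shared backbone, fuse features."""

    def __init__(self, encoder: TinyViT, num_classes: int, dim=384):
        super().__init__()
        self.encoder = encoder                # shared (pretrained) backbone
        self.head = nn.Linear(2 * dim, num_classes)

    def forward(self, rgb, depth):
        feats = torch.cat([self.encoder(rgb), self.encoder(depth)], dim=-1)
        return self.head(feats)


class EarlyFusion(nn.Module):
    """Concatenate RGB and depth patch tokens, encode them jointly."""

    def __init__(self, encoder: TinyViT, num_classes: int, dim=384):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(dim, num_classes)

    def forward(self, rgb, depth):
        tokens = torch.cat([self.encoder.tokens(rgb),
                            self.encoder.tokens(depth)], dim=1)  # (B, 2N, dim)
        return self.head(self.encoder.blocks(tokens).mean(dim=1))


if __name__ == "__main__":
    enc = TinyViT()
    rgb = torch.randn(2, 3, 224, 224)
    depth = torch.randn(2, 3, 224, 224)   # depth pre-rendered to 3 channels
    print(LateFusion(enc, 51)(rgb, depth).shape)   # torch.Size([2, 51])
    print(EarlyFusion(enc, 51)(rgb, depth).shape)  # torch.Size([2, 51])
```

In this sketch, early fusion doubles the token count fed to self-attention, so attention cost grows roughly fourfold, while late fusion simply runs the backbone twice and trains a wider head on the concatenated features. The head size of 51 matches the number of object categories in the Washington RGB-D Objects dataset.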


Related research

11/22/2021 · Florence: A New Foundation Model for Computer Vision
Automated visual understanding of our diverse and open world demands com...

06/05/2018 · Recurrent Convolutional Fusion for RGB-D Object Recognition
Providing machines with the ability to recognize objects like humans has...

07/24/2015 · Multimodal Deep Learning for Robust RGB-D Object Recognition
Robust object recognition is a crucial ingredient of many, if not all, r...

03/24/2017 · Feature Fusion using Extended Jaccard Graph and Stochastic Gradient Descent for Robot
Robot vision is a fundamental device for human-robot interaction and rob...

04/19/2022 · Multimodal Token Fusion for Vision Transformers
Many adaptations of transformers have emerged to address the single-moda...

04/01/2019 · The RGB-D Triathlon: Towards Agile Visual Toolboxes for Robots
Deep networks have brought significant advances in robot perception, ena...

01/20/2022 · Omnivore: A Single Model for Many Visual Modalities
Prior work has studied different visual modalities in isolation and deve...
