Translate-to-Recognize Networks for RGB-D Scene Recognition

04/28/2019
by   Dapeng Du, et al.

Cross-modal transfer can enhance modality-specific discriminative power for scene recognition. To this end, this paper presents a unified framework that integrates the tasks of cross-modal translation and modality-specific recognition, termed the Translate-to-Recognize Network (TRecgNet). Specifically, both the translation and recognition tasks share the same encoder network, which makes it possible to explicitly regularize the training of the recognition task with the help of translation and thus improve its generalization ability. For the translation task, we place a decoder module on top of the encoder network and optimize it with a new layer-wise semantic loss; for the recognition task, we use a linear classifier on the feature embedding from the encoder, trained with the standard cross-entropy loss. In addition, TRecgNet can exploit large amounts of unlabeled RGB-D data to train the translation task and thus improve the representation power of the encoder network. Empirically, we verify that this semi-supervised setting further enhances the performance of the recognition network. Experiments on two RGB-D scene recognition benchmarks, NYU Depth v2 and SUN RGB-D, demonstrate that TRecgNet achieves superior performance to existing state-of-the-art methods, especially for recognition based solely on a single modality.
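
Below is a minimal PyTorch sketch of the idea described in the abstract, not the authors' implementation: a shared encoder feeds both a decoder head (cross-modal translation, e.g. RGB to depth) and a linear classifier head (scene recognition), and the two losses are combined. Layer shapes, the number of scene classes, and the weighting factor `alpha` are illustrative assumptions; the paper's layer-wise semantic loss is approximated here by a plain pixel-wise L1 translation loss.

```python
import torch
import torch.nn as nn

class TRecgNetSketch(nn.Module):
    def __init__(self, num_classes=10):  # class count is an assumption
        super().__init__()
        # Shared encoder: regularized jointly by translation and recognition.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Decoder head: translates the shared embedding into the other modality.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
        )
        # Linear classifier head on the pooled encoder embedding.
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        feat = self.encoder(x)
        translated = self.decoder(feat)        # predicted other modality
        pooled = feat.mean(dim=(2, 3))         # global average pooling
        logits = self.classifier(pooled)
        return translated, logits

def joint_loss(translated, target_modality, logits, labels, alpha=10.0):
    # Translation term (L1 here as a stand-in for the layer-wise semantic loss)
    # plus the standard cross-entropy recognition term. Unlabeled RGB-D pairs
    # can contribute through the translation term alone.
    trans_loss = nn.functional.l1_loss(translated, target_modality)
    cls_loss = nn.functional.cross_entropy(logits, labels)
    return cls_loss + alpha * trans_loss
```

In this sketch the recognition network never sees the other modality at test time; it only benefits from the translation head during training, which is the single-modality setting highlighted in the abstract.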

Related research

Attention-Driven Multi-Modal Fusion: Enhancing Sign Language Recognition and Translation (09/04/2023)
In this paper, we devise a mechanism for the addition of multi-modal inf...

Translate to Adapt: RGB-D Scene Recognition across Domains (03/26/2021)
Scene classification is one of the basic problems in computer vision res...

TAP: The Attention Patch for Cross-Modal Knowledge Transfer from Unlabeled Data (02/04/2023)
This work investigates the intersection of cross modal learning and semi...

Specificity-preserving RGB-D Saliency Detection (08/18/2021)
RGB-D saliency detection has attracted increasing attention, due to its ...

Multi-Modal Hybrid Learning and Sequential Training for RGB-T Saliency Detection (09/13/2023)
RGB-T saliency detection has emerged as an important computer vision tas...

Learning Scene Structure Guidance via Cross-Task Knowledge Transfer for Single Depth Super-Resolution (03/24/2021)
Existing color-guided depth super-resolution (DSR) approaches require pa...

TraVLR: Now You See It, Now You Don't! Evaluating Cross-Modal Transfer of Visio-Linguistic Reasoning (11/21/2021)
Numerous visio-linguistic (V+L) representation learning methods have bee...
