Learning Shared Cross-modality Representation Using Multispectral-LiDAR and Hyperspectral Data

12/18/2019
by Danfeng Hong et al.

Due to the ever-growing diversity of data sources, multi-modality feature learning has attracted increasing attention. However, most existing methods jointly learn feature representations from modalities that are present in both the training and test sets; the case in which a modality is missing at test time remains largely uninvestigated. To this end, in this letter, we propose to learn a shared feature space across multiple modalities during training. In this way, out-of-sample data from any single modality can be directly projected onto the learned space, yielding a more effective cross-modality representation. More significantly, the proposed method treats the shared space as a latent subspace that connects the original multi-modal samples with label information, further improving feature discrimination. Experiments on the multispectral-LiDAR and hyperspectral dataset provided by the 2018 IEEE GRSS Data Fusion Contest demonstrate the effectiveness and superiority of the proposed method over several popular baselines.
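The abstract does not spell out the optimization, but a common way to realize such a shared latent subspace is alternating least squares over per-modality projection matrices, a shared representation, and a label regressor. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' exact formulation; the objective, variable names (X_list, W, S, P), and hyperparameters are all assumptions. It does capture the out-of-sample behavior the abstract describes: a test sample from any single modality is projected into the shared space by its own W_m and classified through P.

```python
import numpy as np

def learn_shared_space(X_list, Y, k=20, lam=1.0, alpha=1e-2, n_iter=50, seed=0):
    """Hypothetical alternating-least-squares sketch of a shared latent subspace.

    X_list : list of (d_m, n) arrays, one per modality, over the same n samples.
    Y      : (c, n) one-hot label matrix.
    Minimizes (an assumed objective, not the paper's exact one):
        ||Y - P S||^2 + lam * sum_m ||S - W_m X_m||^2
        + alpha * (||P||^2 + sum_m ||W_m||^2)
    """
    rng = np.random.default_rng(seed)
    n, M = Y.shape[1], len(X_list)
    S = rng.standard_normal((k, n))                         # shared representation
    W = [0.01 * rng.standard_normal((k, X.shape[0])) for X in X_list]
    P = 0.01 * rng.standard_normal((Y.shape[0], k))         # label regressor
    I_k = np.eye(k)
    for _ in range(n_iter):
        # Update shared representation S (closed-form least squares).
        A = P.T @ P + lam * M * I_k
        B = P.T @ Y + lam * sum(Wm @ Xm for Wm, Xm in zip(W, X_list))
        S = np.linalg.solve(A, B)
        # Update per-modality projections W_m (ridge regression onto S).
        W = [S @ Xm.T @ np.linalg.inv(Xm @ Xm.T + (alpha / lam) * np.eye(Xm.shape[0]))
             for Xm in X_list]
        # Update label regressor P (ridge regression from S to Y).
        P = Y @ S.T @ np.linalg.inv(S @ S.T + alpha * I_k)
    return W, P

def predict(x, W_m, P):
    """Project an out-of-sample vector from modality m and classify it."""
    return np.argmax(P @ (W_m @ x))
```

In this setting, X_list would hold, for example, the multispectral-LiDAR and hyperspectral feature matrices over the same training pixels; at test time only one of the two modalities needs to be available, since each carries its own projection W_m into the shared space.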


