Scale-Semantic Joint Decoupling Network for Image-text Retrieval in Remote Sensing

12/12/2022
by Chengyu Zheng, et al.

Image-text retrieval in remote sensing aims to provide flexible information for data analysis and application. In recent years, state-of-the-art methods have pursued "scale decoupling" and "semantic decoupling" strategies to further enhance representation capability. However, these previous approaches focus on disentangling either scale or semantics, and ignore merging the two ideas in a unified model, which severely limits the performance of cross-modal retrieval models. To address these issues, we propose a novel Scale-Semantic Joint Decoupling Network (SSJDN) for remote sensing image-text retrieval. Specifically, we design a Bidirectional Scale Decoupling (BSD) module, which exploits Salience Feature Extraction (SFE) and Salience-Guided Suppression (SGS) units to adaptively extract potential features and suppress cumbersome features at other scales in a bidirectional pattern, yielding clues at different scales. In addition, we design a Label-supervised Semantic Decoupling (LSD) module that leverages category semantic labels as prior knowledge to supervise images and texts in probing significant semantic-related information. Finally, we design a Semantic-guided Triple Loss (STL), which adaptively generates a constant to adjust the loss function, improving the probability of matching images and texts with the same semantics and shortening the convergence time of the retrieval model. Our proposed SSJDN outperforms state-of-the-art approaches in numerical experiments conducted on four benchmark remote sensing datasets.
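
The abstract does not give the formula for the Semantic-guided Triple Loss (STL), but a minimal PyTorch-style sketch can illustrate the general idea it describes: a bidirectional triplet ranking loss whose margin is adjusted by an adaptively generated, label-dependent constant. The function name, the hinge form, and the direction of the adjustment (relaxing the margin for negatives that share the anchor's category) are illustrative assumptions, not the published STL formulation.

import torch
import torch.nn.functional as F


def semantic_guided_triplet_loss(img_emb, txt_emb, labels,
                                 base_margin=0.2, delta=0.1):
    """Hinge-based bidirectional triplet loss with a label-dependent margin.

    img_emb, txt_emb: (B, D) embeddings of matched image-text pairs (row i of
    each tensor describes the same scene).
    labels: (B,) integer category labels used as prior knowledge.
    The adaptive constant below is an assumed, illustrative form of the
    adjustment described in the abstract, not the paper's exact STL.
    """
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)

    sims = img_emb @ txt_emb.t()            # (B, B) cosine similarity matrix
    pos = sims.diag()                       # similarities of the matched pairs

    # Adaptive constant: shrink the margin when the negative belongs to the
    # same semantic category as the anchor (assumed form of the adjustment).
    same_label = labels.unsqueeze(0) == labels.unsqueeze(1)       # (B, B)
    margin = base_margin - delta * same_label.float()

    eye = torch.eye(sims.size(0), dtype=torch.bool, device=sims.device)
    # Image-to-text direction: rows are image anchors, columns are texts.
    cost_i2t = (margin + sims - pos.unsqueeze(1)).clamp(min=0).masked_fill(eye, 0.0)
    # Text-to-image direction: columns are text anchors, rows are images.
    cost_t2i = (margin + sims - pos.unsqueeze(0)).clamp(min=0).masked_fill(eye, 0.0)
    return cost_i2t.mean() + cost_t2i.mean()


# Toy usage: 8 image-text pairs with 512-d embeddings and 4 semantic classes.
if __name__ == "__main__":
    loss = semantic_guided_triplet_loss(torch.randn(8, 512), torch.randn(8, 512),
                                        torch.randint(0, 4, (8,)))
    print(loss.item())

The label-dependent margin is one simple way to let category supervision bias the ranking objective toward pairs with the same semantics; the paper's actual constant may be computed differently.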


Related research

05/09/2022 · Improved-Flow Warp Module for Remote Sensing Semantic Segmentation
Remote sensing semantic segmentation aims to assign automatically each p...

09/10/2019 · Deep Hashing Learning for Visual and Semantic Retrieval of Remote Sensing Images
Driven by the urgent demand for managing remote sensing big data, large-...

04/09/2019 · CMIR-NET: A Deep Learning Based Model For Cross-Modal Retrieval In Remote Sensing
We address the problem of cross-modal information retrieval in the domai...

04/21/2022 · Exploring a Fine-Grained Multiscale Method for Cross-Modal Remote Sensing Image Retrieval
Remote sensing (RS) cross-modal text-image retrieval has attracted exten...

04/21/2022 · Remote Sensing Cross-Modal Text-Image Retrieval Based on Global and Local Information
Cross-modal remote sensing text-image retrieval (RSCTIR) has recently be...

09/14/2023 · A Multi-scale Generalized Shrinkage Threshold Network for Image Blind Deblurring in Remote Sensing
Remote sensing images are essential for many earth science applications,...

07/07/2023 · General-Purpose Multimodal Transformer meets Remote Sensing Semantic Segmentation
The advent of high-resolution multispectral/hyperspectral sensors, LiDAR...
