Remote Sensing Cross-Modal Text-Image Retrieval Based on Global and Local Information

04/21/2022
by Zhiqiang Yuan, et al.

Cross-modal remote sensing text-image retrieval (RSCTIR) has recently become a pressing research topic due to its ability to enable fast and flexible information extraction from remote sensing (RS) images. However, current RSCTIR methods mainly focus on the global features of RS images, neglecting the local features that reflect target relationships and saliency. In this article, we first propose a novel RSCTIR framework based on global and local information (GaLR), and design a multi-level information dynamic fusion (MIDF) module to effectively integrate features of different levels. MIDF leverages local information to correct global information, utilizes global information to supplement local information, and uses the dynamic addition of the two to generate a prominent visual representation. To alleviate the pressure of redundant targets on the graph convolution network (GCN) and to improve the model's attention to salient instances when modeling local features, a de-noised representation matrix and an enhanced adjacency matrix (DREA) are devised to assist the GCN in producing superior local representations. DREA not only filters out redundant features with high similarity, but also obtains more powerful local features by enhancing the features of prominent objects. Finally, to make full use of the information in the similarity matrix during inference, we propose a plug-and-play multivariate rerank (MR) algorithm. The algorithm uses the k nearest neighbors of the retrieval results to perform a reverse search, and improves performance by combining multiple components of bidirectional retrieval. Extensive experiments on public datasets demonstrate the state-of-the-art performance of GaLR on the RSCTIR task. The code of the GaLR method, the MR algorithm, and the corresponding files is available at https://github.com/xiaoyuan1996/GaLR.
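To make the reranking idea concrete, the Python sketch below performs a k-nearest-neighbor reverse search over a similarity matrix. It is a minimal illustration of the general idea rather than the authors' MR implementation (which is available in the linked repository); the function name multivariate_rerank_sketch, the parameters k and alpha, and the rank-based reverse score are assumptions made for this example.

```python
import numpy as np

def multivariate_rerank_sketch(sim, k=5, alpha=0.7):
    """Illustrative rerank by reverse search (not the authors' exact MR algorithm).

    sim   : (n_images, n_texts) similarity matrix produced by the retrieval model.
    k     : number of nearest-neighbor candidates to reconsider per text query.
    alpha : weight balancing the forward similarity and the reverse-search score.
    """
    n_images, n_texts = sim.shape
    reranked = sim.copy()

    for t in range(n_texts):
        # Forward search: top-k candidate images for text query t.
        candidates = np.argsort(-sim[:, t])[:k]
        for i in candidates:
            # Reverse search: rank of text t among all texts for candidate image i.
            reverse_rank = int(np.where(np.argsort(-sim[i, :]) == t)[0][0])
            # Candidates whose reverse search also ranks the query highly get a boost.
            reverse_score = 1.0 / (1.0 + reverse_rank)
            reranked[i, t] = alpha * sim[i, t] + (1.0 - alpha) * reverse_score

    return reranked

# Example: rerank random similarities for 100 images and 50 text queries.
sim = np.random.rand(100, 50)
sim_rr = multivariate_rerank_sketch(sim, k=5)
```

Because the sketch only reorders the top-k candidates of each query using information already present in the similarity matrix, it can be appended to the inference stage of an existing retrieval model, which matches the plug-and-play character described in the abstract.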

Related research

04/21/2022 - Exploring a Fine-Grained Multiscale Method for Cross-Modal Remote Sensing Image Retrieval
Remote sensing (RS) cross-modal text-image retrieval has attracted exten...

04/19/2022 - Unsupervised Contrastive Hashing for Cross-Modal Retrieval in Remote Sensing
The development of cross-modal retrieval systems that can search and ret...

07/23/2017 - Exploiting Deep Features for Remote Sensing Image Retrieval: A Systematic Investigation
Remote sensing (RS) image retrieval based on visual content is of great ...

02/23/2022 - A Novel Self-Supervised Cross-Modal Image Retrieval Method In Remote Sensing
Due to the availability of multi-modal remote sensing (RS) image archive...

12/12/2022 - Scale-Semantic Joint Decoupling Network for Image-text Retrieval in Remote Sensing
Image-text retrieval in remote sensing aims to provide flexible informat...

09/14/2022 - Learning to Evaluate Performance of Multi-modal Semantic Localization
Semantic localization (SeLo) refers to the task of obtaining the most re...
