Learning to Evaluate Performance of Multi-modal Semantic Localization

09/14/2022
by   Zhiqiang Yuan, et al.

Semantic localization (SeLo) refers to the task of obtaining the most relevant locations in large-scale remote sensing (RS) images using semantic information such as text. As an emerging task based on cross-modal retrieval, SeLo achieves semantic-level retrieval with only caption-level annotation, which demonstrates its great potential for unifying downstream tasks. Although SeLo has been carried out in successive works, no work has yet systematically explored and analyzed this urgent direction. In this paper, we thoroughly study this field and provide a complete benchmark, in terms of metrics and test data, to advance the SeLo task. First, based on the characteristics of this task, we propose multiple discriminative evaluation metrics to quantify SeLo performance. The devised significant area proportion, attention shift distance, and discrete attention distance are utilized to evaluate the generated SeLo map at the pixel and region levels. Next, to provide standard evaluation data for the SeLo task, we contribute a diverse, multi-semantic, multi-objective Semantic Localization Testset (AIR-SLT). AIR-SLT consists of 22 large-scale RS images and 59 test cases with different semantics, and aims to provide a comprehensive evaluation of retrieval models. Finally, we analyze the SeLo performance of RS cross-modal retrieval models in detail, explore the impact of different variables on this task, and provide a complete benchmark for the SeLo task. We have also established a new paradigm for RS referring expression comprehension, and we demonstrate the great semantic advantage of SeLo by combining it with tasks such as detection and road extraction. The proposed evaluation metrics, the semantic localization testset, and the corresponding scripts are openly available at github.com/xiaoyuan1996/SemanticLocalizationMetrics .
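To make the flavor of these metrics concrete, the sketch below shows how a pixel-level and a region-level score of this kind could be computed from a SeLo attention map and a ground-truth region mask. The exact formulas are defined in the paper and its released scripts; the function names, the thresholding scheme, and the diagonal normalization used here are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def significant_area_proportion(selo_map: np.ndarray,
                                gt_mask: np.ndarray,
                                threshold: float = 0.5) -> float:
    """Illustrative pixel-level score (assumed form): the fraction of
    above-threshold attention mass that falls inside the annotated
    ground-truth region. Higher is better.

    selo_map : (H, W) attention/probability map with values in [0, 1]
    gt_mask  : (H, W) binary mask of the ground-truth region
    """
    significant = selo_map >= threshold          # pixels deemed "attended"
    total = selo_map[significant].sum()
    if total == 0:
        return 0.0
    inside = selo_map[significant & (gt_mask > 0)].sum()
    return float(inside / total)

def attention_shift_distance(selo_map: np.ndarray,
                             gt_mask: np.ndarray) -> float:
    """Illustrative region-level score (assumed form): the distance
    between the attention centroid and the ground-truth centroid,
    normalized by the image diagonal. Lower is better."""
    h, w = selo_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    mass = selo_map.sum()
    att_c = np.array([(ys * selo_map).sum() / mass,
                      (xs * selo_map).sum() / mass])
    gt_c = np.array([ys[gt_mask > 0].mean(), xs[gt_mask > 0].mean()])
    return float(np.linalg.norm(att_c - gt_c) / np.hypot(h, w))
```

Scores of this shape can be averaged over the 59 AIR-SLT test cases to compare retrieval models; see the repository linked above for the actual metric definitions and evaluation scripts.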

