Keyword-Based Diverse Image Retrieval by Semantics-aware Contrastive Learning and Transformer

05/06/2023
by Minyi Zhao et al.

In addition to relevance, diversity is an important yet less-studied performance metric of cross-modal image retrieval systems and is critical to user experience. Existing solutions for diversity-aware image retrieval either explicitly post-process the raw results of a standard retrieval system or learn multi-vector representations of images to capture their diverse semantics. However, neither approach balances relevance and diversity well. On the one hand, standard retrieval systems are usually biased toward common semantics and seldom exploit diversity-aware regularization during training, which makes it difficult to promote diversity through post-processing alone. On the other hand, multi-vector representation methods are not guaranteed to learn robust multiple projections; as a result, irrelevant images and images with rare or unique semantics may be projected inappropriately, which degrades both the relevance and the diversity of results produced by typical selection algorithms such as top-k. To cope with these problems, this paper presents a new method, CoLT, that generates more representative and robust representations for accurately classifying images. Specifically, CoLT first extracts semantics-aware image features by enhancing the preliminary representations of an existing one-to-one cross-modal retrieval system with semantics-aware contrastive learning. Then, a transformer-based token classifier assigns all the features to their corresponding semantic categories. Finally, a post-processing algorithm retrieves images from each category to form the final retrieval result. Extensive experiments on two real-world datasets, Div400 and Div150Cred, show that CoLT effectively boosts diversity and outperforms existing methods overall, achieving a higher F1 score.
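
The abstract describes three stages: semantics-aware contrastive learning on top of a pretrained cross-modal encoder, a transformer-based token classifier that assigns each retrieved image feature to a semantic category, and a post-processing step that draws images from each category. The sketch below illustrates one plausible reading of these stages in PyTorch; the module names, dimensions, loss formulation, and the round-robin selection rule are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a CoLT-style pipeline as described in the abstract.
# All names, dimensions, and the exact loss form are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


def semantics_aware_contrastive_loss(features, cluster_ids, temperature=0.1):
    """Supervised (semantics-aware) contrastive loss: features of images that
    share a semantic cluster id are pulled together, others pushed apart."""
    z = F.normalize(features, dim=-1)                       # (N, d)
    sim = z @ z.t() / temperature                           # (N, N) similarities
    pos_mask = (cluster_ids[:, None] == cluster_ids[None, :]).float()
    pos_mask.fill_diagonal_(0)                              # exclude self-pairs
    logits = sim - torch.eye(len(z), device=z.device) * 1e9 # mask self in denominator
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    denom = pos_mask.sum(1).clamp(min=1)
    return -(pos_mask * log_prob).sum(1).div(denom).mean()


class TokenClassifier(nn.Module):
    """Transformer-based token classifier: each retrieved image feature is one
    token; the head assigns it to one of `num_categories` semantic categories
    (an extra class could serve as 'irrelevant')."""
    def __init__(self, dim=512, num_categories=21, depth=2, heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_categories)

    def forward(self, image_tokens):                        # (B, N, dim)
        return self.head(self.encoder(image_tokens))        # (B, N, num_categories)


def diverse_top_k(scores, categories, k=20):
    """Post-processing: round-robin over predicted categories, repeatedly taking
    the most relevant unused image from each category until k images are chosen."""
    order = scores.argsort(descending=True).tolist()
    buckets = {}
    for idx in order:
        buckets.setdefault(int(categories[idx]), []).append(idx)
    result = []
    while len(result) < k and any(buckets.values()):
        for cat in list(buckets):
            if buckets[cat]:
                result.append(buckets[cat].pop(0))
                if len(result) == k:
                    break
    return result
```

The contrastive loss shown here is a standard supervised-contrastive formulation that treats images sharing a semantic cluster as positives; it matches the "semantics-aware" regularization described in the abstract only at a high level, and the per-category round-robin selection is one simple way to trade relevance for diversity in the final ranking.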
